Test Report: Docker_Windows 22332

56e1ce855180c73f84c0d958e6323d58f60b3065:2025-12-27:43013

Failed tests (3/349)

Order  Failed test            Duration (s)
52     TestForceSystemdFlag   563.03
53     TestForceSystemdEnv    529.71
58     TestErrorSpam/setup    42.89
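The first two failures time out in the same `minikube start` invocation (see the log below). A minimal local repro sketch, assembled from that command line — the binary path and profile name here are hypothetical placeholders, and the script only prints the command rather than executing it, since running it requires a built minikube binary and a working Docker daemon:

```shell
#!/bin/sh
# Hypothetical repro sketch for the TestForceSystemdFlag failure.
# PROFILE and the ./out/minikube path are assumptions; adjust for your checkout/platform.
PROFILE="force-systemd-flag-repro"
CMD="./out/minikube start -p ${PROFILE} --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker"
# Print the command instead of running it; copy-paste to execute against a real Docker daemon.
echo "${CMD}"
```

The flags mirror the failing invocation exactly (`--memory=3072 --force-systemd --driver=docker`); only the profile name differs to avoid colliding with CI-created resources.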
TestForceSystemdFlag (563.03s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-637800 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:91: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-flag-637800 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker: exit status 109 (9m14.0183265s)

-- stdout --
	* [force-systemd-flag-637800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-flag-637800" primary control-plane node in "force-systemd-flag-637800" cluster
	* Pulling base image v0.0.48-1766570851-22316 ...
	
	

-- /stdout --
** stderr ** 
	I1227 20:44:45.920893    8368 out.go:360] Setting OutFile to fd 696 ...
	I1227 20:44:45.982890    8368 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:44:45.982890    8368 out.go:374] Setting ErrFile to fd 1980...
	I1227 20:44:45.982890    8368 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:44:45.996889    8368 out.go:368] Setting JSON to false
	I1227 20:44:45.998895    8368 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3672,"bootTime":1766864613,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"22H2","kernelVersion":"10.0.19045.6691 Build 19045.6691","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1227 20:44:45.998895    8368 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1227 20:44:46.002886    8368 out.go:179] * [force-systemd-flag-637800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	I1227 20:44:46.005884    8368 notify.go:221] Checking for updates...
	I1227 20:44:46.006891    8368 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1227 20:44:46.008888    8368 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:44:46.011888    8368 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1227 20:44:46.014886    8368 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:44:46.021885    8368 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:44:46.028892    8368 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:44:46.173537    8368 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1227 20:44:46.176536    8368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:44:46.625093    8368 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:82 SystemTime:2025-12-27 20:44:46.598321745 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1227 20:44:46.630087    8368 out.go:179] * Using the docker driver based on user configuration
	I1227 20:44:46.635085    8368 start.go:309] selected driver: docker
	I1227 20:44:46.635085    8368 start.go:928] validating driver "docker" against <nil>
	I1227 20:44:46.635085    8368 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:44:46.643095    8368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:44:47.082612    8368 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:80 SystemTime:2025-12-27 20:44:47.065017899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1227 20:44:47.082612    8368 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 20:44:47.083612    8368 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 20:44:47.586928    8368 out.go:179] * Using Docker Desktop driver with root privileges
	I1227 20:44:47.627883    8368 cni.go:84] Creating CNI manager for ""
	I1227 20:44:47.627883    8368 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1227 20:44:47.627974    8368 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1227 20:44:47.628211    8368 start.go:353] cluster config:
	{Name:force-systemd-flag-637800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-637800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:44:47.647641    8368 out.go:179] * Starting "force-systemd-flag-637800" primary control-plane node in "force-systemd-flag-637800" cluster
	I1227 20:44:47.689298    8368 cache.go:134] Beginning downloading kic base image for docker with docker
	I1227 20:44:47.746850    8368 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:44:47.786355    8368 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:44:47.786682    8368 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 20:44:47.786825    8368 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1227 20:44:47.786825    8368 cache.go:65] Caching tarball of preloaded images
	I1227 20:44:47.786825    8368 preload.go:251] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1227 20:44:47.786825    8368 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1227 20:44:47.787410    8368 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\config.json ...
	I1227 20:44:47.787410    8368 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\config.json: {Name:mk5ebd4e14f5837357f270b7883f6c7cd5b53f6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:44:47.862081    8368 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:44:47.862081    8368 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:44:47.862081    8368 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:44:47.862081    8368 start.go:360] acquireMachinesLock for force-systemd-flag-637800: {Name:mk4fea70227937b59b139a887f8b0cb3d2cd6442 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:44:47.863093    8368 start.go:364] duration metric: took 0s to acquireMachinesLock for "force-systemd-flag-637800"
	I1227 20:44:47.863093    8368 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-637800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-637800 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Static
IP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1227 20:44:47.863093    8368 start.go:125] createHost starting for "" (driver="docker")
	I1227 20:44:47.880072    8368 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 20:44:47.880072    8368 start.go:159] libmachine.API.Create for "force-systemd-flag-637800" (driver="docker")
	I1227 20:44:47.880072    8368 client.go:173] LocalClient.Create starting
	I1227 20:44:47.881099    8368 main.go:144] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1227 20:44:47.881099    8368 main.go:144] libmachine: Decoding PEM data...
	I1227 20:44:47.881099    8368 main.go:144] libmachine: Parsing certificate...
	I1227 20:44:47.881099    8368 main.go:144] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1227 20:44:47.881099    8368 main.go:144] libmachine: Decoding PEM data...
	I1227 20:44:47.881099    8368 main.go:144] libmachine: Parsing certificate...
	I1227 20:44:47.887066    8368 cli_runner.go:164] Run: docker network inspect force-systemd-flag-637800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 20:44:47.940066    8368 cli_runner.go:211] docker network inspect force-systemd-flag-637800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 20:44:47.944063    8368 network_create.go:284] running [docker network inspect force-systemd-flag-637800] to gather additional debugging logs...
	I1227 20:44:47.944063    8368 cli_runner.go:164] Run: docker network inspect force-systemd-flag-637800
	W1227 20:44:47.998085    8368 cli_runner.go:211] docker network inspect force-systemd-flag-637800 returned with exit code 1
	I1227 20:44:47.998085    8368 network_create.go:287] error running [docker network inspect force-systemd-flag-637800]: docker network inspect force-systemd-flag-637800: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-637800 not found
	I1227 20:44:47.998085    8368 network_create.go:289] output of [docker network inspect force-systemd-flag-637800]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-637800 not found
	
	** /stderr **
	I1227 20:44:48.003078    8368 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:44:48.072066    8368 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1227 20:44:48.103085    8368 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1227 20:44:48.135070    8368 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1227 20:44:48.167070    8368 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1227 20:44:48.185069    8368 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00173c930}
	I1227 20:44:48.185069    8368 network_create.go:124] attempt to create docker network force-systemd-flag-637800 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1227 20:44:48.189082    8368 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-637800 force-systemd-flag-637800
	I1227 20:44:48.404084    8368 network_create.go:108] docker network force-systemd-flag-637800 192.168.85.0/24 created
	I1227 20:44:48.404084    8368 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-637800" container
	I1227 20:44:48.412080    8368 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 20:44:48.465075    8368 cli_runner.go:164] Run: docker volume create force-systemd-flag-637800 --label name.minikube.sigs.k8s.io=force-systemd-flag-637800 --label created_by.minikube.sigs.k8s.io=true
	I1227 20:44:48.520075    8368 oci.go:103] Successfully created a docker volume force-systemd-flag-637800
	I1227 20:44:48.525073    8368 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-637800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-637800 --entrypoint /usr/bin/test -v force-systemd-flag-637800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 20:44:50.649852    8368 cli_runner.go:217] Completed: docker run --rm --name force-systemd-flag-637800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-637800 --entrypoint /usr/bin/test -v force-systemd-flag-637800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib: (2.124756s)
	I1227 20:44:50.649852    8368 oci.go:107] Successfully prepared a docker volume force-systemd-flag-637800
	I1227 20:44:50.649852    8368 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 20:44:50.649852    8368 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 20:44:50.653860    8368 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-637800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 20:45:37.448145    8368 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-637800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (46.7927616s)
	I1227 20:45:37.449148    8368 kic.go:203] duration metric: took 46.7987712s to extract preloaded images to volume ...
	I1227 20:45:37.455146    8368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:45:37.857805    8368 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:89 SystemTime:2025-12-27 20:45:37.837600918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1227 20:45:37.861810    8368 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 20:45:38.272921    8368 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-637800 --name force-systemd-flag-637800 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-637800 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-637800 --network force-systemd-flag-637800 --ip 192.168.85.2 --volume force-systemd-flag-637800:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 20:45:39.981024    8368 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-637800 --name force-systemd-flag-637800 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-637800 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-637800 --network force-systemd-flag-637800 --ip 192.168.85.2 --volume force-systemd-flag-637800:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a: (1.7080833s)
	I1227 20:45:39.987019    8368 cli_runner.go:164] Run: docker container inspect force-systemd-flag-637800 --format={{.State.Running}}
	I1227 20:45:40.069325    8368 cli_runner.go:164] Run: docker container inspect force-systemd-flag-637800 --format={{.State.Status}}
	I1227 20:45:40.146347    8368 cli_runner.go:164] Run: docker exec force-systemd-flag-637800 stat /var/lib/dpkg/alternatives/iptables
	I1227 20:45:40.293379    8368 oci.go:144] the created container "force-systemd-flag-637800" has a running status.
	I1227 20:45:40.293379    8368 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-637800\id_rsa...
	I1227 20:45:40.323338    8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-637800\id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 20:45:40.336351    8368 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-637800\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 20:45:40.448317    8368 cli_runner.go:164] Run: docker container inspect force-systemd-flag-637800 --format={{.State.Status}}
	I1227 20:45:40.528305    8368 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 20:45:40.528305    8368 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-637800 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 20:45:40.680318    8368 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-637800\id_rsa...
	I1227 20:45:43.749971    8368 cli_runner.go:164] Run: docker container inspect force-systemd-flag-637800 --format={{.State.Status}}
	I1227 20:45:43.816978    8368 machine.go:94] provisionDockerMachine start ...
	I1227 20:45:43.822980    8368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-637800
	I1227 20:45:43.896972    8368 main.go:144] libmachine: Using SSH client type: native
	I1227 20:45:43.910971    8368 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil>  [] 0s} 127.0.0.1 59660 <nil> <nil>}
	I1227 20:45:43.910971    8368 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:45:44.091696    8368 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-637800
	
	I1227 20:45:44.091742    8368 ubuntu.go:182] provisioning hostname "force-systemd-flag-637800"
	I1227 20:45:44.097956    8368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-637800
	I1227 20:45:44.158858    8368 main.go:144] libmachine: Using SSH client type: native
	I1227 20:45:44.158858    8368 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil>  [] 0s} 127.0.0.1 59660 <nil> <nil>}
	I1227 20:45:44.158858    8368 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-637800 && echo "force-systemd-flag-637800" | sudo tee /etc/hostname
	I1227 20:45:44.366740    8368 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-637800
	
	I1227 20:45:44.370730    8368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-637800
	I1227 20:45:44.425744    8368 main.go:144] libmachine: Using SSH client type: native
	I1227 20:45:44.426743    8368 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil>  [] 0s} 127.0.0.1 59660 <nil> <nil>}
	I1227 20:45:44.426743    8368 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-637800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-637800/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-637800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:45:44.601883    8368 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:45:44.601883    8368 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1227 20:45:44.601883    8368 ubuntu.go:190] setting up certificates
	I1227 20:45:44.601883    8368 provision.go:84] configureAuth start
	I1227 20:45:44.606878    8368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-637800
	I1227 20:45:44.671293    8368 provision.go:143] copyHostCerts
	I1227 20:45:44.671383    8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1227 20:45:44.671383    8368 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1227 20:45:44.671383    8368 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1227 20:45:44.671927    8368 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1227 20:45:44.672872    8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1227 20:45:44.672872    8368 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1227 20:45:44.672872    8368 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1227 20:45:44.672872    8368 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1227 20:45:44.673715    8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1227 20:45:44.673715    8368 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1227 20:45:44.673715    8368 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1227 20:45:44.674600    8368 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1227 20:45:44.675643    8368 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.force-systemd-flag-637800 san=[127.0.0.1 192.168.85.2 force-systemd-flag-637800 localhost minikube]
	I1227 20:45:44.927042    8368 provision.go:177] copyRemoteCerts
	I1227 20:45:44.932043    8368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:45:44.936043    8368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-637800
	I1227 20:45:44.989056    8368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59660 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-637800\id_rsa Username:docker}
	I1227 20:45:45.122632    8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1227 20:45:45.122632    8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 20:45:45.157375    8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1227 20:45:45.157375    8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1204 bytes)
	I1227 20:45:45.199605    8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1227 20:45:45.199605    8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:45:45.227510    8368 provision.go:87] duration metric: took 625.6196ms to configureAuth
	I1227 20:45:45.227510    8368 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:45:45.228505    8368 config.go:182] Loaded profile config "force-systemd-flag-637800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 20:45:45.232509    8368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-637800
	I1227 20:45:45.289516    8368 main.go:144] libmachine: Using SSH client type: native
	I1227 20:45:45.289516    8368 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil>  [] 0s} 127.0.0.1 59660 <nil> <nil>}
	I1227 20:45:45.289516    8368 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1227 20:45:45.465784    8368 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1227 20:45:45.465845    8368 ubuntu.go:71] root file system type: overlay
	I1227 20:45:45.466000    8368 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1227 20:45:45.469796    8368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-637800
	I1227 20:45:45.523648    8368 main.go:144] libmachine: Using SSH client type: native
	I1227 20:45:45.523648    8368 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil>  [] 0s} 127.0.0.1 59660 <nil> <nil>}
	I1227 20:45:45.523648    8368 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1227 20:45:45.694975    8368 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1227 20:45:45.698935    8368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-637800
	I1227 20:45:45.757922    8368 main.go:144] libmachine: Using SSH client type: native
	I1227 20:45:45.758282    8368 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil>  [] 0s} 127.0.0.1 59660 <nil> <nil>}
	I1227 20:45:45.758282    8368 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1227 20:45:48.338984    8368 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-27 20:45:45.681571544 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1227 20:45:48.339074    8368 machine.go:97] duration metric: took 4.5220458s to provisionDockerMachine
	I1227 20:45:48.339074    8368 client.go:176] duration metric: took 1m0.4583257s to LocalClient.Create
	I1227 20:45:48.339171    8368 start.go:167] duration metric: took 1m0.4584219s to libmachine.API.Create "force-systemd-flag-637800"
	I1227 20:45:48.339231    8368 start.go:293] postStartSetup for "force-systemd-flag-637800" (driver="docker")
	I1227 20:45:48.339286    8368 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:45:48.347419    8368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:45:48.354032    8368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-637800
	I1227 20:45:48.407854    8368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59660 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-637800\id_rsa Username:docker}
	I1227 20:45:48.524848    8368 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:45:48.531860    8368 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:45:48.531860    8368 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:45:48.531860    8368 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1227 20:45:48.531860    8368 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1227 20:45:48.532863    8368 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\136562.pem -> 136562.pem in /etc/ssl/certs
	I1227 20:45:48.532863    8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\136562.pem -> /etc/ssl/certs/136562.pem
	I1227 20:45:48.539861    8368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:45:48.551857    8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\136562.pem --> /etc/ssl/certs/136562.pem (1708 bytes)
	I1227 20:45:48.585854    8368 start.go:296] duration metric: took 246.62ms for postStartSetup
	I1227 20:45:48.591856    8368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-637800
	I1227 20:45:48.646856    8368 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\config.json ...
	I1227 20:45:48.654852    8368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:45:48.660853    8368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-637800
	I1227 20:45:48.716856    8368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59660 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-637800\id_rsa Username:docker}
	I1227 20:45:48.851956    8368 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:45:48.866124    8368 start.go:128] duration metric: took 1m1.0023474s to createHost
	I1227 20:45:48.866124    8368 start.go:83] releasing machines lock for "force-systemd-flag-637800", held for 1m1.0023474s
	I1227 20:45:48.869695    8368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-637800
	I1227 20:45:48.920690    8368 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1227 20:45:48.924704    8368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-637800
	I1227 20:45:48.924704    8368 ssh_runner.go:195] Run: cat /version.json
	I1227 20:45:48.927693    8368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-637800
	I1227 20:45:48.983710    8368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59660 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-637800\id_rsa Username:docker}
	I1227 20:45:48.984703    8368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59660 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-637800\id_rsa Username:docker}
	W1227 20:45:49.091146    8368 start.go:879] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1227 20:45:49.097074    8368 ssh_runner.go:195] Run: systemctl --version
	I1227 20:45:49.114049    8368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:45:49.123063    8368 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:45:49.128052    8368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:45:49.181056    8368 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1227 20:45:49.181056    8368 start.go:496] detecting cgroup driver to use...
	I1227 20:45:49.181056    8368 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 20:45:49.181056    8368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:45:49.209058    8368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1227 20:45:49.224065    8368 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1227 20:45:49.224065    8368 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1227 20:45:49.230057    8368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 20:45:49.244061    8368 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1227 20:45:49.248049    8368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1227 20:45:49.267051    8368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 20:45:49.288058    8368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 20:45:49.307052    8368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 20:45:49.324071    8368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:45:49.343053    8368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 20:45:49.361051    8368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 20:45:49.381053    8368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 20:45:49.399067    8368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:45:49.416074    8368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:45:49.433051    8368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:45:49.596936    8368 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1227 20:45:49.788307    8368 start.go:496] detecting cgroup driver to use...
	I1227 20:45:49.788307    8368 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 20:45:49.795301    8368 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1227 20:45:49.824310    8368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:45:49.846295    8368 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:45:49.947797    8368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:45:49.974075    8368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 20:45:49.993081    8368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:45:50.019086    8368 ssh_runner.go:195] Run: which cri-dockerd
	I1227 20:45:50.031092    8368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1227 20:45:50.044078    8368 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1227 20:45:50.070069    8368 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1227 20:45:50.238421    8368 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1227 20:45:50.404739    8368 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1227 20:45:50.404739    8368 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1227 20:45:50.527688    8368 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1227 20:45:50.556809    8368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:45:50.763676    8368 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1227 20:45:51.820290    8368 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0566026s)
	I1227 20:45:51.823616    8368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:45:51.846852    8368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1227 20:45:51.869195    8368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 20:45:51.892137    8368 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1227 20:45:52.040646    8368 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1227 20:45:52.180010    8368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:45:52.331942    8368 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1227 20:45:52.357934    8368 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1227 20:45:52.379935    8368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:45:52.488834    8368 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1227 20:45:52.606921    8368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 20:45:52.626419    8368 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1227 20:45:52.630413    8368 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1227 20:45:52.637418    8368 start.go:574] Will wait 60s for crictl version
	I1227 20:45:52.640405    8368 ssh_runner.go:195] Run: which crictl
	I1227 20:45:52.652423    8368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:45:52.699028    8368 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1227 20:45:52.702039    8368 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 20:45:52.755278    8368 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 20:45:52.813153    8368 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I1227 20:45:52.817748    8368 cli_runner.go:164] Run: docker exec -t force-systemd-flag-637800 dig +short host.docker.internal
	I1227 20:45:52.962809    8368 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1227 20:45:52.966825    8368 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1227 20:45:52.973818    8368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:45:52.994806    8368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" force-systemd-flag-637800
	I1227 20:45:53.047818    8368 kubeadm.go:884] updating cluster {Name:force-systemd-flag-637800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-637800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:45:53.047818    8368 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 20:45:53.051814    8368 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 20:45:53.085490    8368 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1227 20:45:53.085536    8368 docker.go:624] Images already preloaded, skipping extraction
	I1227 20:45:53.089980    8368 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 20:45:53.126005    8368 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1227 20:45:53.126005    8368 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:45:53.126005    8368 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 docker true true} ...
	I1227 20:45:53.126005    8368 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-637800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-637800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:45:53.129982    8368 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1227 20:45:53.204979    8368 cni.go:84] Creating CNI manager for ""
	I1227 20:45:53.204979    8368 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1227 20:45:53.204979    8368 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:45:53.204979    8368 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-637800 NodeName:force-systemd-flag-637800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:45:53.205979    8368 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-flag-637800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:45:53.209989    8368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:45:53.222979    8368 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:45:53.226975    8368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:45:53.240986    8368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1227 20:45:53.261114    8368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:45:53.285444    8368 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1227 20:45:53.312181    8368 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:45:53.319745    8368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:45:53.338745    8368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:45:53.490194    8368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:45:53.512335    8368 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800 for IP: 192.168.85.2
	I1227 20:45:53.512335    8368 certs.go:195] generating shared ca certs ...
	I1227 20:45:53.512335    8368 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:45:53.513171    8368 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1227 20:45:53.513171    8368 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1227 20:45:53.513171    8368 certs.go:257] generating profile certs ...
	I1227 20:45:53.514122    8368 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\client.key
	I1227 20:45:53.514420    8368 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\client.crt with IP's: []
	I1227 20:45:53.583995    8368 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\client.crt ...
	I1227 20:45:53.583995    8368 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\client.crt: {Name:mk5ae8d8bc510c098f8c076201617d960d137d96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:45:53.585093    8368 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\client.key ...
	I1227 20:45:53.586006    8368 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\client.key: {Name:mkb67d769000f92c5919e771c804ef4e1ae7c469 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:45:53.587013    8368 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.key.81dfed14
	I1227 20:45:53.587013    8368 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.crt.81dfed14 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1227 20:45:53.713877    8368 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.crt.81dfed14 ...
	I1227 20:45:53.713877    8368 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.crt.81dfed14: {Name:mkde3590f25df3075aba03614a6d757a7100a23d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:45:53.714882    8368 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.key.81dfed14 ...
	I1227 20:45:53.714882    8368 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.key.81dfed14: {Name:mk70c9faccd859488cfdafb43a0e0791895e7add Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:45:53.715882    8368 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.crt.81dfed14 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.crt
	I1227 20:45:53.732888    8368 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.key.81dfed14 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.key
	I1227 20:45:53.733887    8368 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\proxy-client.key
	I1227 20:45:53.733887    8368 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\proxy-client.crt with IP's: []
	I1227 20:45:53.851820    8368 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\proxy-client.crt ...
	I1227 20:45:53.851820    8368 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\proxy-client.crt: {Name:mk00719c9a9b789569dd3aa2fef5e42c4e1ded43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:45:53.852821    8368 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\proxy-client.key ...
	I1227 20:45:53.852821    8368 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\proxy-client.key: {Name:mk246c37ea059f9862608a5865bbacf9edb46773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:45:53.852821    8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:45:53.852821    8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:45:53.852821    8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:45:53.853846    8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:45:53.853846    8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:45:53.853846    8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:45:53.853846    8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:45:53.864361    8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:45:53.864515    8368 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13656.pem (1338 bytes)
	W1227 20:45:53.865174    8368 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13656_empty.pem, impossibly tiny 0 bytes
	I1227 20:45:53.865174    8368 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1227 20:45:53.865799    8368 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1227 20:45:53.865941    8368 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1227 20:45:53.866198    8368 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1227 20:45:53.866424    8368 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\136562.pem (1708 bytes)
	I1227 20:45:53.866954    8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:45:53.867001    8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13656.pem -> /usr/share/ca-certificates/13656.pem
	I1227 20:45:53.867132    8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\136562.pem -> /usr/share/ca-certificates/136562.pem
	I1227 20:45:53.867811    8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:45:53.900367    8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 20:45:53.929531    8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:45:53.960431    8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:45:53.986425    8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 20:45:54.019454    8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:45:54.047451    8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:45:54.077296    8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 20:45:54.106294    8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:45:54.134287    8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13656.pem --> /usr/share/ca-certificates/13656.pem (1338 bytes)
	I1227 20:45:54.163289    8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\136562.pem --> /usr/share/ca-certificates/136562.pem (1708 bytes)
	I1227 20:45:54.190284    8368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:45:54.213298    8368 ssh_runner.go:195] Run: openssl version
	I1227 20:45:54.227284    8368 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:45:54.243304    8368 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:45:54.259288    8368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:45:54.266290    8368 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:45:54.271291    8368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:45:54.320866    8368 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:45:54.336869    8368 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 20:45:54.355325    8368 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13656.pem
	I1227 20:45:54.371330    8368 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13656.pem /etc/ssl/certs/13656.pem
	I1227 20:45:54.389308    8368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13656.pem
	I1227 20:45:54.396309    8368 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:04 /usr/share/ca-certificates/13656.pem
	I1227 20:45:54.402308    8368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13656.pem
	I1227 20:45:54.451159    8368 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:45:54.470157    8368 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13656.pem /etc/ssl/certs/51391683.0
	I1227 20:45:54.485154    8368 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/136562.pem
	I1227 20:45:54.502156    8368 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/136562.pem /etc/ssl/certs/136562.pem
	I1227 20:45:54.519175    8368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136562.pem
	I1227 20:45:54.526158    8368 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:04 /usr/share/ca-certificates/136562.pem
	I1227 20:45:54.530148    8368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136562.pem
	I1227 20:45:54.593161    8368 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:45:54.615923    8368 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/136562.pem /etc/ssl/certs/3ec20f2e.0
	I1227 20:45:54.636440    8368 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:45:54.644429    8368 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 20:45:54.644429    8368 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-637800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-637800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:45:54.648423    8368 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1227 20:45:54.684444    8368 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:45:54.705431    8368 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 20:45:54.717422    8368 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:45:54.722430    8368 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:45:54.735425    8368 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:45:54.735425    8368 kubeadm.go:158] found existing configuration files:
	
	I1227 20:45:54.739435    8368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:45:54.752526    8368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:45:54.760571    8368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:45:54.778752    8368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:45:54.795328    8368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:45:54.801434    8368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:45:54.822117    8368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:45:54.835111    8368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:45:54.839110    8368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:45:54.855116    8368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:45:54.867110    8368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:45:54.871110    8368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 20:45:54.888111    8368 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:45:55.005940    8368 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1227 20:45:55.102889    8368 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 20:45:55.230001    8368 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:49:57.186046    8368 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1227 20:49:57.186185    8368 kubeadm.go:319] 
	I1227 20:49:57.186185    8368 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 20:49:57.190844    8368 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 20:49:57.190844    8368 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:49:57.191375    8368 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:49:57.191633    8368 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1227 20:49:57.191790    8368 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1227 20:49:57.191894    8368 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1227 20:49:57.191894    8368 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1227 20:49:57.191894    8368 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1227 20:49:57.191894    8368 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1227 20:49:57.191894    8368 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1227 20:49:57.192426    8368 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1227 20:49:57.192530    8368 kubeadm.go:319] CONFIG_INET: enabled
	I1227 20:49:57.192628    8368 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1227 20:49:57.192763    8368 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1227 20:49:57.192918    8368 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1227 20:49:57.193096    8368 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1227 20:49:57.193214    8368 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1227 20:49:57.193319    8368 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1227 20:49:57.193429    8368 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1227 20:49:57.193543    8368 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1227 20:49:57.193695    8368 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1227 20:49:57.193788    8368 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1227 20:49:57.193868    8368 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1227 20:49:57.193940    8368 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1227 20:49:57.194066    8368 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1227 20:49:57.194164    8368 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1227 20:49:57.194229    8368 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1227 20:49:57.194229    8368 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1227 20:49:57.194229    8368 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1227 20:49:57.194229    8368 kubeadm.go:319] OS: Linux
	I1227 20:49:57.194229    8368 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:49:57.194817    8368 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 20:49:57.194879    8368 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:49:57.194879    8368 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:49:57.194879    8368 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:49:57.194879    8368 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:49:57.194879    8368 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:49:57.194879    8368 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:49:57.195473    8368 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 20:49:57.195473    8368 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:49:57.195473    8368 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:49:57.195473    8368 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 20:49:57.196058    8368 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 20:49:57.200588    8368 out.go:252]   - Generating certificates and keys ...
	I1227 20:49:57.200588    8368 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:49:57.200588    8368 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:49:57.201150    8368 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 20:49:57.201256    8368 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 20:49:57.201256    8368 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 20:49:57.201256    8368 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 20:49:57.201256    8368 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 20:49:57.201821    8368 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-637800 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 20:49:57.201937    8368 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 20:49:57.202046    8368 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-637800 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 20:49:57.202046    8368 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 20:49:57.202046    8368 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 20:49:57.202633    8368 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 20:49:57.202633    8368 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:49:57.202633    8368 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:49:57.202633    8368 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 20:49:57.202633    8368 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:49:57.202633    8368 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:49:57.203207    8368 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:49:57.203361    8368 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:49:57.203361    8368 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 20:49:57.205132    8368 out.go:252]   - Booting up control plane ...
	I1227 20:49:57.205680    8368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:49:57.205680    8368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:49:57.205680    8368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:49:57.205680    8368 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:49:57.205680    8368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 20:49:57.205680    8368 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 20:49:57.206681    8368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:49:57.206681    8368 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:49:57.206681    8368 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 20:49:57.206681    8368 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 20:49:57.207414    8368 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000173718s
	I1227 20:49:57.207414    8368 kubeadm.go:319] 
	I1227 20:49:57.207414    8368 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 20:49:57.207414    8368 kubeadm.go:319] 	- The kubelet is not running
	I1227 20:49:57.207414    8368 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 20:49:57.207414    8368 kubeadm.go:319] 
	I1227 20:49:57.207414    8368 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 20:49:57.207414    8368 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 20:49:57.208092    8368 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 20:49:57.208183    8368 kubeadm.go:319] 
	W1227 20:49:57.208311    8368 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-637800 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-637800 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000173718s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1227 20:49:57.211515    8368 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1227 20:49:57.665139    8368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:49:57.687065    8368 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:49:57.691329    8368 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:49:57.704375    8368 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:49:57.704375    8368 kubeadm.go:158] found existing configuration files:
	
	I1227 20:49:57.708511    8368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:49:57.724433    8368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:49:57.730256    8368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:49:57.748506    8368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:49:57.763257    8368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:49:57.767089    8368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:49:57.784672    8368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:49:57.798565    8368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:49:57.803841    8368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:49:57.820275    8368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:49:57.833415    8368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:49:57.837021    8368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 20:49:57.854541    8368 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:49:57.978335    8368 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1227 20:49:58.063508    8368 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 20:49:58.163766    8368 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:53:59.060878    8368 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 20:53:59.060994    8368 kubeadm.go:319] 
	I1227 20:53:59.061027    8368 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 20:53:59.065962    8368 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 20:53:59.065962    8368 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:53:59.066638    8368 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:53:59.066828    8368 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1227 20:53:59.066998    8368 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1227 20:53:59.067110    8368 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1227 20:53:59.067254    8368 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1227 20:53:59.067424    8368 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1227 20:53:59.067538    8368 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1227 20:53:59.067776    8368 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1227 20:53:59.067885    8368 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1227 20:53:59.067999    8368 kubeadm.go:319] CONFIG_INET: enabled
	I1227 20:53:59.068195    8368 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1227 20:53:59.068359    8368 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1227 20:53:59.068588    8368 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1227 20:53:59.068774    8368 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1227 20:53:59.068950    8368 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1227 20:53:59.069079    8368 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1227 20:53:59.069272    8368 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1227 20:53:59.069313    8368 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1227 20:53:59.069313    8368 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1227 20:53:59.069313    8368 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1227 20:53:59.069313    8368 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1227 20:53:59.069313    8368 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1227 20:53:59.069840    8368 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1227 20:53:59.070006    8368 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1227 20:53:59.070175    8368 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1227 20:53:59.070302    8368 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1227 20:53:59.070375    8368 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1227 20:53:59.070375    8368 kubeadm.go:319] OS: Linux
	I1227 20:53:59.070375    8368 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:53:59.070375    8368 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 20:53:59.070911    8368 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:53:59.071095    8368 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:53:59.071295    8368 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:53:59.071473    8368 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:53:59.071624    8368 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:53:59.071846    8368 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:53:59.071990    8368 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 20:53:59.072054    8368 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:53:59.072054    8368 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:53:59.072592    8368 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 20:53:59.072797    8368 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 20:53:59.074527    8368 out.go:252]   - Generating certificates and keys ...
	I1227 20:53:59.074527    8368 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:53:59.075176    8368 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:53:59.075210    8368 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 20:53:59.075210    8368 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 20:53:59.075210    8368 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 20:53:59.075210    8368 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 20:53:59.075210    8368 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 20:53:59.075210    8368 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 20:53:59.076121    8368 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 20:53:59.076121    8368 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 20:53:59.076121    8368 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 20:53:59.076121    8368 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:53:59.076121    8368 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:53:59.076121    8368 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 20:53:59.076121    8368 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:53:59.076121    8368 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:53:59.077110    8368 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:53:59.077110    8368 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:53:59.077110    8368 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 20:53:59.080109    8368 out.go:252]   - Booting up control plane ...
	I1227 20:53:59.080109    8368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:53:59.080109    8368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:53:59.080109    8368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:53:59.081109    8368 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:53:59.081109    8368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 20:53:59.081109    8368 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 20:53:59.081109    8368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:53:59.081109    8368 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:53:59.082108    8368 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 20:53:59.082108    8368 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 20:53:59.082108    8368 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001080211s
	I1227 20:53:59.082108    8368 kubeadm.go:319] 
	I1227 20:53:59.082108    8368 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 20:53:59.082108    8368 kubeadm.go:319] 	- The kubelet is not running
	I1227 20:53:59.082108    8368 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 20:53:59.082108    8368 kubeadm.go:319] 
	I1227 20:53:59.083110    8368 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 20:53:59.083110    8368 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 20:53:59.083110    8368 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 20:53:59.083110    8368 kubeadm.go:319] 
	I1227 20:53:59.083110    8368 kubeadm.go:403] duration metric: took 8m4.4331849s to StartCluster
	I1227 20:53:59.083110    8368 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:53:59.086714    8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:53:59.167941    8368 cri.go:96] found id: ""
	I1227 20:53:59.167941    8368 logs.go:282] 0 containers: []
	W1227 20:53:59.167941    8368 logs.go:284] No container was found matching "kube-apiserver"
	I1227 20:53:59.167941    8368 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:53:59.171939    8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:53:59.218478    8368 cri.go:96] found id: ""
	I1227 20:53:59.218478    8368 logs.go:282] 0 containers: []
	W1227 20:53:59.218478    8368 logs.go:284] No container was found matching "etcd"
	I1227 20:53:59.218478    8368 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:53:59.226822    8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:53:59.284237    8368 cri.go:96] found id: ""
	I1227 20:53:59.284237    8368 logs.go:282] 0 containers: []
	W1227 20:53:59.284237    8368 logs.go:284] No container was found matching "coredns"
	I1227 20:53:59.284237    8368 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:53:59.288231    8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:53:59.377891    8368 cri.go:96] found id: ""
	I1227 20:53:59.377891    8368 logs.go:282] 0 containers: []
	W1227 20:53:59.377891    8368 logs.go:284] No container was found matching "kube-scheduler"
	I1227 20:53:59.377891    8368 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:53:59.382906    8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:53:59.440858    8368 cri.go:96] found id: ""
	I1227 20:53:59.440858    8368 logs.go:282] 0 containers: []
	W1227 20:53:59.440858    8368 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:53:59.440858    8368 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:53:59.444864    8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:53:59.494387    8368 cri.go:96] found id: ""
	I1227 20:53:59.494387    8368 logs.go:282] 0 containers: []
	W1227 20:53:59.494387    8368 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 20:53:59.494387    8368 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:53:59.499982    8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:53:59.549234    8368 cri.go:96] found id: ""
	I1227 20:53:59.549234    8368 logs.go:282] 0 containers: []
	W1227 20:53:59.549234    8368 logs.go:284] No container was found matching "kindnet"
	I1227 20:53:59.549234    8368 logs.go:123] Gathering logs for kubelet ...
	I1227 20:53:59.549234    8368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:53:59.622162    8368 logs.go:123] Gathering logs for dmesg ...
	I1227 20:53:59.622162    8368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:53:59.659345    8368 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:53:59.659345    8368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:53:59.739538    8368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:53:59.729848   10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:53:59.730979   10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:53:59.731673   10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:53:59.734528   10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:53:59.735388   10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:53:59.729848   10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:53:59.730979   10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:53:59.731673   10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:53:59.734528   10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:53:59.735388   10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:53:59.739538    8368 logs.go:123] Gathering logs for Docker ...
	I1227 20:53:59.739538    8368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1227 20:53:59.773539    8368 logs.go:123] Gathering logs for container status ...
	I1227 20:53:59.773539    8368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1227 20:53:59.825106    8368 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001080211s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 20:53:59.825106    8368 out.go:285] * 
	W1227 20:53:59.825106    8368 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001080211s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 20:53:59.826106    8368 out.go:285] * 
	W1227 20:53:59.826106    8368 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 20:53:59.831107    8368 out.go:203] 
	W1227 20:53:59.835110    8368 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001080211s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 20:53:59.835110    8368 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 20:53:59.835110    8368 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
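	The SystemVerification warning above states that cgroup v1 support for kubelet v1.35 or newer requires explicitly setting the kubelet configuration option `FailCgroupV1` to `false`, and the suggestion recommends the systemd cgroup driver. As a hedged sketch only (field names assumed from the warning text, not extracted from this run's config), the corresponding KubeletConfiguration fragment would look roughly like:

	```yaml
	# Hypothetical KubeletConfiguration fragment; not taken from this run.
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: systemd   # matches the suggested --extra-config=kubelet.cgroup-driver=systemd
	failCgroupV1: false     # explicitly allow cgroup v1, per the SystemVerification warning
	```

	With minikube, the driver setting can be attempted via the suggested `--extra-config=kubelet.cgroup-driver=systemd` flag on `minikube start`.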
	I1227 20:53:59.838109    8368 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-flag-637800 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker" : exit status 109
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-637800 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-27 20:54:00.9046817 +0000 UTC m=+3513.036184801
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-flag-637800
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-637800:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0daf14eba1572263ad71241828729d2517fdd9925d22383cc68641cec9751df0",
	        "Created": "2025-12-27T20:45:38.326397606Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 178530,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:45:39.310328144Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/0daf14eba1572263ad71241828729d2517fdd9925d22383cc68641cec9751df0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0daf14eba1572263ad71241828729d2517fdd9925d22383cc68641cec9751df0/hostname",
	        "HostsPath": "/var/lib/docker/containers/0daf14eba1572263ad71241828729d2517fdd9925d22383cc68641cec9751df0/hosts",
	        "LogPath": "/var/lib/docker/containers/0daf14eba1572263ad71241828729d2517fdd9925d22383cc68641cec9751df0/0daf14eba1572263ad71241828729d2517fdd9925d22383cc68641cec9751df0-json.log",
	        "Name": "/force-systemd-flag-637800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-637800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-637800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1af882482536703d2d0fb7d4938f688f34205856ddeb30211a91c0e05949f9b5-init/diff:/var/lib/docker/overlay2/cc9bc6a1bc34df01fcf2646a74af47280e16e85e4444f747f528eb17ae725d09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1af882482536703d2d0fb7d4938f688f34205856ddeb30211a91c0e05949f9b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1af882482536703d2d0fb7d4938f688f34205856ddeb30211a91c0e05949f9b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1af882482536703d2d0fb7d4938f688f34205856ddeb30211a91c0e05949f9b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-637800",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-637800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-637800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-637800",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-637800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2941a1a36c5e57306f300cc710f321bb5e77b6a482f1684c61dc7ccb3cd4a0cb",
	            "SandboxKey": "/var/run/docker/netns/2941a1a36c5e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59660"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59661"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59662"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59663"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59664"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-637800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "9721f24ad9d812e3d6ab44ec6c3549073102d8e033685f5c65cbbafc6107d266",
	                    "EndpointID": "792ee673e615f61a629892909a7414c18bf31f940ece5099a725f7e98f79392a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-637800",
	                        "0daf14eba157"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-637800 -n force-systemd-flag-637800
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-637800 -n force-systemd-flag-637800: exit status 6 (563.7809ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 20:54:01.497822    2428 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-637800" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-637800 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-637800 logs -n 25: (2.1029756s)
helpers_test.go:261: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │          PROFILE          │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-630300 sudo cri-dockerd --version                                                                                                                                                                                │ cilium-630300             │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-630300 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                  │ cilium-630300             │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-630300 sudo systemctl cat containerd --no-pager                                                                                                                                                                  │ cilium-630300             │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-630300 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                           │ cilium-630300             │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-630300 sudo cat /etc/containerd/config.toml                                                                                                                                                                      │ cilium-630300             │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-630300 sudo containerd config dump                                                                                                                                                                               │ cilium-630300             │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-630300 sudo systemctl status crio --all --full --no-pager                                                                                                                                                        │ cilium-630300             │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-630300 sudo systemctl cat crio --no-pager                                                                                                                                                                        │ cilium-630300             │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-630300 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                              │ cilium-630300             │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-630300 sudo crio config                                                                                                                                                                                          │ cilium-630300             │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │                     │
	│ delete  │ -p cilium-630300                                                                                                                                                                                                           │ cilium-630300             │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ start   │ -p NoKubernetes-924000 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker                                                                                                                                        │ NoKubernetes-924000       │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │                     │
	│ start   │ -p NoKubernetes-924000 --memory=3072 --alsologtostderr -v=5 --driver=docker                                                                                                                                                │ NoKubernetes-924000       │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ start   │ -p NoKubernetes-924000 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker                                                                                                                                │ NoKubernetes-924000       │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:53 UTC │
	│ delete  │ -p NoKubernetes-924000                                                                                                                                                                                                     │ NoKubernetes-924000       │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
	│ start   │ -p NoKubernetes-924000 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker                                                                                                                                │ NoKubernetes-924000       │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
	│ ssh     │ -p NoKubernetes-924000 sudo systemctl is-active --quiet service kubelet                                                                                                                                                    │ NoKubernetes-924000       │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │                     │
	│ stop    │ -p NoKubernetes-924000                                                                                                                                                                                                     │ NoKubernetes-924000       │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
	│ start   │ -p NoKubernetes-924000 --driver=docker                                                                                                                                                                                     │ NoKubernetes-924000       │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
	│ delete  │ -p stopped-upgrade-172600                                                                                                                                                                                                  │ stopped-upgrade-172600    │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
	│ start   │ -p cert-options-955700 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost │ cert-options-955700       │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │                     │
	│ ssh     │ -p NoKubernetes-924000 sudo systemctl is-active --quiet service kubelet                                                                                                                                                    │ NoKubernetes-924000       │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │                     │
	│ delete  │ -p NoKubernetes-924000                                                                                                                                                                                                     │ NoKubernetes-924000       │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
	│ start   │ -p cert-expiration-978000 --memory=3072 --cert-expiration=3m --driver=docker                                                                                                                                               │ cert-expiration-978000    │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │                     │
	│ ssh     │ force-systemd-flag-637800 ssh docker info --format {{.CgroupDriver}}                                                                                                                                                       │ force-systemd-flag-637800 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:53:55
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:53:55.870774   13396 out.go:360] Setting OutFile to fd 1048 ...
	I1227 20:53:55.923649   13396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:53:55.923649   13396 out.go:374] Setting ErrFile to fd 908...
	I1227 20:53:55.923649   13396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:53:55.938226   13396 out.go:368] Setting JSON to false
	I1227 20:53:55.943220   13396 start.go:133] hostinfo: {"hostname":"minikube4","uptime":4222,"bootTime":1766864613,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"22H2","kernelVersion":"10.0.19045.6691 Build 19045.6691","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1227 20:53:55.943220   13396 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1227 20:53:55.947212   13396 out.go:179] * [cert-expiration-978000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	I1227 20:53:55.953208   13396 notify.go:221] Checking for updates...
	I1227 20:53:55.958549   13396 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1227 20:53:55.964592   13396 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:53:55.969592   13396 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1227 20:53:55.975586   13396 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:53:55.979582   13396 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:53:55.985592   13396 config.go:182] Loaded profile config "cert-options-955700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 20:53:55.985592   13396 config.go:182] Loaded profile config "force-systemd-flag-637800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 20:53:55.986588   13396 config.go:182] Loaded profile config "running-upgrade-127300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1227 20:53:55.986588   13396 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:53:56.119586   13396 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1227 20:53:56.123644   13396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:53:56.377199   13396 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:105 SystemTime:2025-12-27 20:53:56.351136497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1227 20:53:56.379209   13396 out.go:179] * Using the docker driver based on user configuration
	I1227 20:53:56.384219   13396 start.go:309] selected driver: docker
	I1227 20:53:56.384219   13396 start.go:928] validating driver "docker" against <nil>
	I1227 20:53:56.384219   13396 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:53:56.390199   13396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:53:56.650705   13396 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-27 20:53:56.631093248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1227 20:53:56.650705   13396 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 20:53:56.651720   13396 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 20:53:56.656707   13396 out.go:179] * Using Docker Desktop driver with root privileges
	I1227 20:53:56.658715   13396 cni.go:84] Creating CNI manager for ""
	I1227 20:53:56.658715   13396 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1227 20:53:56.658715   13396 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1227 20:53:56.658715   13396 start.go:353] cluster config:
	{Name:cert-expiration-978000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:cert-expiration-978000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:53:56.660710   13396 out.go:179] * Starting "cert-expiration-978000" primary control-plane node in "cert-expiration-978000" cluster
	I1227 20:53:56.667715   13396 cache.go:134] Beginning downloading kic base image for docker with docker
	I1227 20:53:56.669714   13396 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:53:56.673722   13396 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 20:53:56.673722   13396 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:53:56.673722   13396 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1227 20:53:56.673722   13396 cache.go:65] Caching tarball of preloaded images
	I1227 20:53:56.673722   13396 preload.go:251] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1227 20:53:56.673722   13396 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1227 20:53:56.674716   13396 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-978000\config.json ...
	I1227 20:53:56.674716   13396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-978000\config.json: {Name:mkb7d1993c220c17da5cbef47edfd03ae6fead9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:53:56.750724   13396 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:53:56.750724   13396 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:53:56.751723   13396 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:53:56.751723   13396 start.go:360] acquireMachinesLock for cert-expiration-978000: {Name:mkf7c69e71f2771f2c30c98cbe8b45870562cec4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:53:56.751723   13396 start.go:364] duration metric: took 0s to acquireMachinesLock for "cert-expiration-978000"
	I1227 20:53:56.751723   13396 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-978000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:cert-expiration-978000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1227 20:53:56.751723   13396 start.go:125] createHost starting for "" (driver="docker")
	I1227 20:53:55.240579   11092 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-options-955700:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (10.8094115s)
	I1227 20:53:55.240579   11092 kic.go:203] duration metric: took 10.8136027s to extract preloaded images to volume ...
	I1227 20:53:55.244576   11092 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:53:55.477630   11092 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:82 OomKillDisable:true NGoroutines:92 SystemTime:2025-12-27 20:53:55.457364973 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1227 20:53:55.482211   11092 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 20:53:55.725510   11092 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-options-955700 --name cert-options-955700 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-955700 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-options-955700 --network cert-options-955700 --ip 192.168.76.2 --volume cert-options-955700:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8555 --publish=127.0.0.1::8555 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 20:53:56.537169   11092 cli_runner.go:164] Run: docker container inspect cert-options-955700 --format={{.State.Running}}
	I1227 20:53:56.605712   11092 cli_runner.go:164] Run: docker container inspect cert-options-955700 --format={{.State.Status}}
	I1227 20:53:56.663709   11092 cli_runner.go:164] Run: docker exec cert-options-955700 stat /var/lib/dpkg/alternatives/iptables
	I1227 20:53:56.783166   11092 oci.go:144] the created container "cert-options-955700" has a running status.
	I1227 20:53:56.783166   11092 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-options-955700\id_rsa...
	I1227 20:53:59.060878    8368 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 20:53:59.060994    8368 kubeadm.go:319] 
	I1227 20:53:59.061027    8368 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 20:53:59.065962    8368 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 20:53:59.065962    8368 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:53:59.066638    8368 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:53:59.066828    8368 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1227 20:53:59.066998    8368 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1227 20:53:59.067110    8368 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1227 20:53:59.067254    8368 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1227 20:53:59.067424    8368 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1227 20:53:59.067538    8368 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1227 20:53:59.067776    8368 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1227 20:53:59.067885    8368 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1227 20:53:59.067999    8368 kubeadm.go:319] CONFIG_INET: enabled
	I1227 20:53:59.068195    8368 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1227 20:53:59.068359    8368 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1227 20:53:59.068588    8368 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1227 20:53:59.068774    8368 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1227 20:53:59.068950    8368 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1227 20:53:59.069079    8368 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1227 20:53:59.069272    8368 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1227 20:53:59.069313    8368 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1227 20:53:59.069313    8368 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1227 20:53:59.069313    8368 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1227 20:53:59.069313    8368 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1227 20:53:59.069313    8368 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1227 20:53:59.069840    8368 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1227 20:53:59.070006    8368 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1227 20:53:59.070175    8368 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1227 20:53:59.070302    8368 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1227 20:53:59.070375    8368 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1227 20:53:59.070375    8368 kubeadm.go:319] OS: Linux
	I1227 20:53:59.070375    8368 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:53:59.070375    8368 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 20:53:59.070911    8368 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:53:59.071095    8368 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:53:59.071295    8368 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:53:59.071473    8368 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:53:59.071624    8368 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:53:59.071846    8368 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:53:59.071990    8368 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 20:53:59.072054    8368 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:53:59.072054    8368 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:53:59.072592    8368 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 20:53:59.072797    8368 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 20:53:59.074527    8368 out.go:252]   - Generating certificates and keys ...
	I1227 20:53:59.074527    8368 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:53:59.075176    8368 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:53:59.075210    8368 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 20:53:59.075210    8368 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 20:53:59.075210    8368 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 20:53:59.075210    8368 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 20:53:59.075210    8368 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 20:53:59.075210    8368 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 20:53:59.076121    8368 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 20:53:59.076121    8368 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 20:53:59.076121    8368 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 20:53:59.076121    8368 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:53:59.076121    8368 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:53:59.076121    8368 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 20:53:59.076121    8368 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:53:59.076121    8368 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:53:59.077110    8368 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:53:59.077110    8368 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:53:59.077110    8368 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 20:53:59.080109    8368 out.go:252]   - Booting up control plane ...
	I1227 20:53:59.080109    8368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:53:59.080109    8368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:53:59.080109    8368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:53:59.081109    8368 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:53:59.081109    8368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 20:53:59.081109    8368 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 20:53:59.081109    8368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:53:59.081109    8368 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:53:59.082108    8368 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 20:53:59.082108    8368 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 20:53:59.082108    8368 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001080211s
	I1227 20:53:59.082108    8368 kubeadm.go:319] 
	I1227 20:53:59.082108    8368 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 20:53:59.082108    8368 kubeadm.go:319] 	- The kubelet is not running
	I1227 20:53:59.082108    8368 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 20:53:59.082108    8368 kubeadm.go:319] 
	I1227 20:53:59.083110    8368 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 20:53:59.083110    8368 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 20:53:59.083110    8368 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 20:53:59.083110    8368 kubeadm.go:319] 
	I1227 20:53:59.083110    8368 kubeadm.go:403] duration metric: took 8m4.4331849s to StartCluster
	I1227 20:53:59.083110    8368 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:53:59.086714    8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:53:59.167941    8368 cri.go:96] found id: ""
	I1227 20:53:59.167941    8368 logs.go:282] 0 containers: []
	W1227 20:53:59.167941    8368 logs.go:284] No container was found matching "kube-apiserver"
	I1227 20:53:59.167941    8368 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:53:59.171939    8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:53:59.218478    8368 cri.go:96] found id: ""
	I1227 20:53:59.218478    8368 logs.go:282] 0 containers: []
	W1227 20:53:59.218478    8368 logs.go:284] No container was found matching "etcd"
	I1227 20:53:59.218478    8368 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:53:59.226822    8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:53:59.284237    8368 cri.go:96] found id: ""
	I1227 20:53:59.284237    8368 logs.go:282] 0 containers: []
	W1227 20:53:59.284237    8368 logs.go:284] No container was found matching "coredns"
	I1227 20:53:59.284237    8368 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:53:59.288231    8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:53:59.377891    8368 cri.go:96] found id: ""
	I1227 20:53:59.377891    8368 logs.go:282] 0 containers: []
	W1227 20:53:59.377891    8368 logs.go:284] No container was found matching "kube-scheduler"
	I1227 20:53:59.377891    8368 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:53:59.382906    8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:53:59.440858    8368 cri.go:96] found id: ""
	I1227 20:53:59.440858    8368 logs.go:282] 0 containers: []
	W1227 20:53:59.440858    8368 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:53:59.440858    8368 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:53:59.444864    8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:53:59.494387    8368 cri.go:96] found id: ""
	I1227 20:53:59.494387    8368 logs.go:282] 0 containers: []
	W1227 20:53:59.494387    8368 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 20:53:59.494387    8368 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:53:59.499982    8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:53:59.549234    8368 cri.go:96] found id: ""
	I1227 20:53:59.549234    8368 logs.go:282] 0 containers: []
	W1227 20:53:59.549234    8368 logs.go:284] No container was found matching "kindnet"
	I1227 20:53:59.549234    8368 logs.go:123] Gathering logs for kubelet ...
	I1227 20:53:59.549234    8368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:53:59.622162    8368 logs.go:123] Gathering logs for dmesg ...
	I1227 20:53:59.622162    8368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:53:59.659345    8368 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:53:59.659345    8368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:53:59.739538    8368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:53:59.729848   10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:53:59.730979   10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:53:59.731673   10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:53:59.734528   10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:53:59.735388   10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:53:59.729848   10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:53:59.730979   10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:53:59.731673   10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:53:59.734528   10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:53:59.735388   10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:53:59.739538    8368 logs.go:123] Gathering logs for Docker ...
	I1227 20:53:59.739538    8368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1227 20:53:59.773539    8368 logs.go:123] Gathering logs for container status ...
	I1227 20:53:59.773539    8368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1227 20:53:59.825106    8368 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001080211s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 20:53:59.825106    8368 out.go:285] * 
	W1227 20:53:59.825106    8368 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1227 20:53:59.826106    8368 out.go:285] * 
	W1227 20:53:59.826106    8368 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 20:53:59.831107    8368 out.go:203] 
	W1227 20:53:59.835110    8368 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1227 20:53:59.835110    8368 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 20:53:59.835110    8368 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 20:53:59.838109    8368 out.go:203] 
	I1227 20:53:57.431785   13384 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.7108963s)
	I1227 20:53:57.436790   13384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:53:57.465806   13384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 20:53:57.483772   13384 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:53:57.487780   13384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:53:57.502786   13384 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:53:57.502786   13384 kubeadm.go:158] found existing configuration files:
	
	I1227 20:53:57.508785   13384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:53:57.524777   13384 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:53:57.529788   13384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:53:57.552791   13384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:53:57.566779   13384 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:53:57.570784   13384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:53:57.596900   13384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:53:57.611895   13384 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:53:57.614903   13384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:53:57.637059   13384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:53:57.651066   13384 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:53:57.655065   13384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 20:53:57.672063   13384 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:53:57.753671   13384 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1227 20:53:57.763547   13384 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1227 20:53:57.882353   13384 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:53:56.760720   13396 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 20:53:56.760720   13396 start.go:159] libmachine.API.Create for "cert-expiration-978000" (driver="docker")
	I1227 20:53:56.760720   13396 client.go:173] LocalClient.Create starting
	I1227 20:53:56.760720   13396 main.go:144] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1227 20:53:56.760720   13396 main.go:144] libmachine: Decoding PEM data...
	I1227 20:53:56.760720   13396 main.go:144] libmachine: Parsing certificate...
	I1227 20:53:56.761726   13396 main.go:144] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1227 20:53:56.761726   13396 main.go:144] libmachine: Decoding PEM data...
	I1227 20:53:56.761726   13396 main.go:144] libmachine: Parsing certificate...
	I1227 20:53:56.768104   13396 cli_runner.go:164] Run: docker network inspect cert-expiration-978000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 20:53:56.822175   13396 cli_runner.go:211] docker network inspect cert-expiration-978000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 20:53:56.826158   13396 network_create.go:284] running [docker network inspect cert-expiration-978000] to gather additional debugging logs...
	I1227 20:53:56.826158   13396 cli_runner.go:164] Run: docker network inspect cert-expiration-978000
	W1227 20:53:56.876161   13396 cli_runner.go:211] docker network inspect cert-expiration-978000 returned with exit code 1
	I1227 20:53:56.876161   13396 network_create.go:287] error running [docker network inspect cert-expiration-978000]: docker network inspect cert-expiration-978000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-978000 not found
	I1227 20:53:56.876161   13396 network_create.go:289] output of [docker network inspect cert-expiration-978000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-978000 not found
	
	** /stderr **
	I1227 20:53:56.880170   13396 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:53:56.961330   13396 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1227 20:53:56.991940   13396 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1227 20:53:57.023945   13396 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1227 20:53:57.055537   13396 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1227 20:53:57.087241   13396 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1227 20:53:57.102500   13396 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017e9dd0}
	I1227 20:53:57.102500   13396 network_create.go:124] attempt to create docker network cert-expiration-978000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1227 20:53:57.106790   13396 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-978000 cert-expiration-978000
	W1227 20:53:57.164189   13396 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-978000 cert-expiration-978000 returned with exit code 1
	W1227 20:53:57.165183   13396 network_create.go:149] failed to create docker network cert-expiration-978000 192.168.94.0/24 with gateway 192.168.94.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-978000 cert-expiration-978000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1227 20:53:57.165183   13396 network_create.go:116] failed to create docker network cert-expiration-978000 192.168.94.0/24, will retry: subnet is taken
	I1227 20:53:57.196468   13396 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1227 20:53:57.215696   13396 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001943bc0}
	I1227 20:53:57.215696   13396 network_create.go:124] attempt to create docker network cert-expiration-978000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1227 20:53:57.220707   13396 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-978000 cert-expiration-978000
	I1227 20:53:57.385139   13396 network_create.go:108] docker network cert-expiration-978000 192.168.103.0/24 created
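The network.go lines above show minikube stepping through candidate private /24 subnets (49, 58, 67, 76, 85 reserved; 94 chosen, then 103 after the "Pool overlaps" retry) and taking the first one not already in use. The third octet advances by 9 each step in this log. A rough sketch of that selection loop (the function name, the step size of 9, and the 192.168.0.0/16 base are inferred from this log, not taken from minikube's source):

```python
import ipaddress

# Candidate subnets advance by 9 in the third octet, as observed in the log:
# 192.168.49.0/24, 192.168.58.0/24, ... (inferred pattern, not minikube source).
def pick_free_subnet(reserved, start=49, step=9):
    octet = start
    while octet <= 255 - step:
        subnet = ipaddress.ip_network(f"192.168.{octet}.0/24")
        if subnet not in reserved:
            return subnet  # first candidate with no overlap
        octet += step
    return None  # no free candidate in range

# Subnets the log reported as reserved before the first pick:
reserved = {ipaddress.ip_network(f"192.168.{o}.0/24") for o in (49, 58, 67, 76, 85)}
print(pick_free_subnet(reserved))  # -> 192.168.94.0/24, matching the log
```

When the `docker network create` for 192.168.94.0/24 failed with "Pool overlaps with other one on this address space", the log shows minikube adding that subnet to the reserved set and re-running the same selection, which yields 192.168.103.0/24 — the network that was ultimately created.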
	I1227 20:53:57.385139   13396 kic.go:121] calculated static IP "192.168.103.2" for the "cert-expiration-978000" container
	I1227 20:53:57.395778   13396 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 20:53:57.462777   13396 cli_runner.go:164] Run: docker volume create cert-expiration-978000 --label name.minikube.sigs.k8s.io=cert-expiration-978000 --label created_by.minikube.sigs.k8s.io=true
	I1227 20:53:57.533783   13396 oci.go:103] Successfully created a docker volume cert-expiration-978000
	I1227 20:53:57.538777   13396 cli_runner.go:164] Run: docker run --rm --name cert-expiration-978000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-978000 --entrypoint /usr/bin/test -v cert-expiration-978000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 20:53:58.816595   13396 cli_runner.go:217] Completed: docker run --rm --name cert-expiration-978000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-978000 --entrypoint /usr/bin/test -v cert-expiration-978000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib: (1.2778028s)
	I1227 20:53:58.816595   13396 oci.go:107] Successfully prepared a docker volume cert-expiration-978000
	I1227 20:53:58.816595   13396 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 20:53:58.816595   13396 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 20:53:58.820589   13396 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-expiration-978000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 20:53:57.154179   11092 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-options-955700\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 20:53:57.237685   11092 cli_runner.go:164] Run: docker container inspect cert-options-955700 --format={{.State.Status}}
	I1227 20:53:57.290686   11092 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 20:53:57.290686   11092 kic_runner.go:114] Args: [docker exec --privileged cert-options-955700 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 20:53:57.409803   11092 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-options-955700\id_rsa...
	I1227 20:53:59.731541   11092 cli_runner.go:164] Run: docker container inspect cert-options-955700 --format={{.State.Status}}
	I1227 20:53:59.780538   11092 machine.go:94] provisionDockerMachine start ...
	I1227 20:53:59.784612   11092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-955700
	I1227 20:53:59.840102   11092 main.go:144] libmachine: Using SSH client type: native
	I1227 20:53:59.854107   11092 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil>  [] 0s} 127.0.0.1 60668 <nil> <nil>}
	I1227 20:53:59.854107   11092 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:54:00.023411   11092 main.go:144] libmachine: SSH cmd err, output: <nil>: cert-options-955700
	
	I1227 20:54:00.023411   11092 ubuntu.go:182] provisioning hostname "cert-options-955700"
	I1227 20:54:00.029391   11092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-955700
	I1227 20:54:00.091411   11092 main.go:144] libmachine: Using SSH client type: native
	I1227 20:54:00.091411   11092 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil>  [] 0s} 127.0.0.1 60668 <nil> <nil>}
	I1227 20:54:00.091411   11092 main.go:144] libmachine: About to run SSH command:
	sudo hostname cert-options-955700 && echo "cert-options-955700" | sudo tee /etc/hostname
	I1227 20:54:00.279737   11092 main.go:144] libmachine: SSH cmd err, output: <nil>: cert-options-955700
	
	I1227 20:54:00.284748   11092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-955700
	I1227 20:54:00.344761   11092 main.go:144] libmachine: Using SSH client type: native
	I1227 20:54:00.345740   11092 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil>  [] 0s} 127.0.0.1 60668 <nil> <nil>}
	I1227 20:54:00.345740   11092 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-955700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-955700/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-955700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:54:00.512687   11092 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:54:00.512687   11092 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1227 20:54:00.512687   11092 ubuntu.go:190] setting up certificates
	I1227 20:54:00.512687   11092 provision.go:84] configureAuth start
	I1227 20:54:00.516698   11092 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-options-955700
	I1227 20:54:00.569690   11092 provision.go:143] copyHostCerts
	I1227 20:54:00.569690   11092 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1227 20:54:00.569690   11092 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1227 20:54:00.569690   11092 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1227 20:54:00.570694   11092 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1227 20:54:00.570694   11092 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1227 20:54:00.570694   11092 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1227 20:54:00.571693   11092 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1227 20:54:00.571693   11092 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1227 20:54:00.571693   11092 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1227 20:54:00.572693   11092 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.cert-options-955700 san=[127.0.0.1 192.168.76.2 cert-options-955700 localhost minikube]
	I1227 20:54:00.624609   11092 provision.go:177] copyRemoteCerts
	I1227 20:54:00.627601   11092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:54:00.630604   11092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-955700
	I1227 20:54:00.683603   11092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60668 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-options-955700\id_rsa Username:docker}
	I1227 20:54:00.820632   11092 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I1227 20:54:00.851623   11092 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 20:54:00.880280   11092 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 20:54:00.911864   11092 provision.go:87] duration metric: took 399.1374ms to configureAuth
	I1227 20:54:00.911893   11092 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:54:00.911893   11092 config.go:182] Loaded profile config "cert-options-955700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 20:54:00.915549   11092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-955700
	I1227 20:54:00.974111   11092 main.go:144] libmachine: Using SSH client type: native
	I1227 20:54:00.974111   11092 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil>  [] 0s} 127.0.0.1 60668 <nil> <nil>}
	I1227 20:54:00.974111   11092 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1227 20:54:01.142159   11092 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1227 20:54:01.142159   11092 ubuntu.go:71] root file system type: overlay
	I1227 20:54:01.142159   11092 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1227 20:54:01.146157   11092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-955700
	I1227 20:54:01.195187   11092 main.go:144] libmachine: Using SSH client type: native
	I1227 20:54:01.196159   11092 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil>  [] 0s} 127.0.0.1 60668 <nil> <nil>}
	I1227 20:54:01.196159   11092 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1227 20:54:01.591866   11092 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1227 20:54:01.598481   11092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-955700
	I1227 20:54:01.666085   11092 main.go:144] libmachine: Using SSH client type: native
	I1227 20:54:01.666693   11092 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil>  [] 0s} 127.0.0.1 60668 <nil> <nil>}
	I1227 20:54:01.666693   11092 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
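The comment block inside the rendered unit above describes the ExecStart-clearing idiom for systemd override files. As a standalone illustration of that idiom (a sketch: the drop-in path and the trimmed flag set here are hypothetical, not what minikube writes):

```ini
# /etc/systemd/system/docker.service.d/10-override.conf  (hypothetical path)
[Service]
# An empty ExecStart= first clears the command inherited from the base unit.
# Without it, systemd rejects the unit with:
#   Service has more than one ExecStart= setting, which is only allowed
#   for Type=oneshot services.
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
```

The unit minikube renders keeps this same pattern even though, per the `diff`/`mv` command above, it replaces `/lib/systemd/system/docker.service` outright and then runs `systemctl daemon-reload` and `systemctl restart docker`.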
	
	
	==> Docker <==
	Dec 27 20:45:51 force-systemd-flag-637800 dockerd[1192]: time="2025-12-27T20:45:51.698481581Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 27 20:45:51 force-systemd-flag-637800 dockerd[1192]: time="2025-12-27T20:45:51.698573888Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 27 20:45:51 force-systemd-flag-637800 dockerd[1192]: time="2025-12-27T20:45:51.698585389Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 27 20:45:51 force-systemd-flag-637800 dockerd[1192]: time="2025-12-27T20:45:51.698590790Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 27 20:45:51 force-systemd-flag-637800 dockerd[1192]: time="2025-12-27T20:45:51.698595990Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 27 20:45:51 force-systemd-flag-637800 dockerd[1192]: time="2025-12-27T20:45:51.698618692Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 27 20:45:51 force-systemd-flag-637800 dockerd[1192]: time="2025-12-27T20:45:51.698658595Z" level=info msg="Initializing buildkit"
	Dec 27 20:45:51 force-systemd-flag-637800 dockerd[1192]: time="2025-12-27T20:45:51.810183005Z" level=info msg="Completed buildkit initialization"
	Dec 27 20:45:51 force-systemd-flag-637800 dockerd[1192]: time="2025-12-27T20:45:51.815900682Z" level=info msg="Daemon has completed initialization"
	Dec 27 20:45:51 force-systemd-flag-637800 dockerd[1192]: time="2025-12-27T20:45:51.816136302Z" level=info msg="API listen on /run/docker.sock"
	Dec 27 20:45:51 force-systemd-flag-637800 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 27 20:45:51 force-systemd-flag-637800 dockerd[1192]: time="2025-12-27T20:45:51.816189906Z" level=info msg="API listen on [::]:2376"
	Dec 27 20:45:51 force-systemd-flag-637800 dockerd[1192]: time="2025-12-27T20:45:51.816191306Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 27 20:45:52 force-systemd-flag-637800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 27 20:45:52 force-systemd-flag-637800 cri-dockerd[1485]: time="2025-12-27T20:45:52Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 27 20:45:52 force-systemd-flag-637800 cri-dockerd[1485]: time="2025-12-27T20:45:52Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 27 20:45:52 force-systemd-flag-637800 cri-dockerd[1485]: time="2025-12-27T20:45:52Z" level=info msg="Start docker client with request timeout 0s"
	Dec 27 20:45:52 force-systemd-flag-637800 cri-dockerd[1485]: time="2025-12-27T20:45:52Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 27 20:45:52 force-systemd-flag-637800 cri-dockerd[1485]: time="2025-12-27T20:45:52Z" level=info msg="Loaded network plugin cni"
	Dec 27 20:45:52 force-systemd-flag-637800 cri-dockerd[1485]: time="2025-12-27T20:45:52Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 27 20:45:52 force-systemd-flag-637800 cri-dockerd[1485]: time="2025-12-27T20:45:52Z" level=info msg="Setting cgroupDriver systemd"
	Dec 27 20:45:52 force-systemd-flag-637800 cri-dockerd[1485]: time="2025-12-27T20:45:52Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 27 20:45:52 force-systemd-flag-637800 cri-dockerd[1485]: time="2025-12-27T20:45:52Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 27 20:45:52 force-systemd-flag-637800 cri-dockerd[1485]: time="2025-12-27T20:45:52Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 27 20:45:52 force-systemd-flag-637800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:54:03.480384   10538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:54:03.481367   10538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:54:03.483796   10538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:54:03.485469   10538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:54:03.486293   10538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000000] FS:  0000000000000000 GS:  0000000000000000
	[  +0.876144] CPU: 8 PID: 268388 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fbb60b22b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7fbb60b22af6.
	[  +0.000001] RSP: 002b:00007fff28debba0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[ +10.748039] CPU: 13 PID: 270153 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7fbd88e39b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7fbd88e39af6.
	[  +0.000001] RSP: 002b:00007ffe9e2a8310 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +3.087400] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 20:54:03 up  1:09,  0 user,  load average: 4.33, 3.98, 3.17
	Linux force-systemd-flag-637800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 27 20:54:00 force-systemd-flag-637800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 20:54:00 force-systemd-flag-637800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 27 20:54:00 force-systemd-flag-637800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:54:00 force-systemd-flag-637800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:54:00 force-systemd-flag-637800 kubelet[10354]: E1227 20:54:00.842207   10354 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 20:54:00 force-systemd-flag-637800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 20:54:00 force-systemd-flag-637800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 20:54:01 force-systemd-flag-637800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 27 20:54:01 force-systemd-flag-637800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:54:01 force-systemd-flag-637800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:54:01 force-systemd-flag-637800 kubelet[10408]: E1227 20:54:01.600282   10408 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 20:54:01 force-systemd-flag-637800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 20:54:01 force-systemd-flag-637800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 20:54:02 force-systemd-flag-637800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 27 20:54:02 force-systemd-flag-637800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:54:02 force-systemd-flag-637800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:54:02 force-systemd-flag-637800 kubelet[10424]: E1227 20:54:02.340808   10424 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 20:54:02 force-systemd-flag-637800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 20:54:02 force-systemd-flag-637800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 20:54:02 force-systemd-flag-637800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 27 20:54:02 force-systemd-flag-637800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:54:03 force-systemd-flag-637800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:54:03 force-systemd-flag-637800 kubelet[10513]: E1227 20:54:03.080428   10513 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 20:54:03 force-systemd-flag-637800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 20:54:03 force-systemd-flag-637800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p force-systemd-flag-637800 -n force-systemd-flag-637800
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p force-systemd-flag-637800 -n force-systemd-flag-637800: exit status 6 (580.676ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 20:54:04.375176    3668 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-637800" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-637800" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-637800" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-637800
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-637800: (4.4541792s)
--- FAIL: TestForceSystemdFlag (563.03s)

                                                
                                    
TestForceSystemdEnv (529.71s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-821200 --memory=3072 --alsologtostderr -v=5 --driver=docker
docker_test.go:155: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-env-821200 --memory=3072 --alsologtostderr -v=5 --driver=docker: exit status 109 (8m42.0783674s)

                                                
                                                
-- stdout --
	* [force-systemd-env-821200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-env-821200" primary control-plane node in "force-systemd-env-821200" cluster
	* Pulling base image v0.0.48-1766570851-22316 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:54:08.957161    1360 out.go:360] Setting OutFile to fd 1180 ...
	I1227 20:54:09.012905    1360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:54:09.012905    1360 out.go:374] Setting ErrFile to fd 1584...
	I1227 20:54:09.012905    1360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:54:09.033521    1360 out.go:368] Setting JSON to false
	I1227 20:54:09.037278    1360 start.go:133] hostinfo: {"hostname":"minikube4","uptime":4235,"bootTime":1766864613,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"22H2","kernelVersion":"10.0.19045.6691 Build 19045.6691","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1227 20:54:09.037278    1360 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1227 20:54:09.043277    1360 out.go:179] * [force-systemd-env-821200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	I1227 20:54:09.048310    1360 notify.go:221] Checking for updates...
	I1227 20:54:09.050278    1360 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1227 20:54:09.055289    1360 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1227 20:54:09.060279    1360 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:54:09.063278    1360 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:54:09.071404    1360 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1227 20:54:09.077828    1360 config.go:182] Loaded profile config "cert-expiration-978000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 20:54:09.078268    1360 config.go:182] Loaded profile config "cert-options-955700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 20:54:09.078268    1360 config.go:182] Loaded profile config "running-upgrade-127300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1227 20:54:09.078268    1360 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:54:09.271978    1360 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1227 20:54:09.275973    1360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:54:09.584337    1360 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:true NGoroutines:90 SystemTime:2025-12-27 20:54:09.561153733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1227 20:54:09.602332    1360 out.go:179] * Using the docker driver based on user configuration
	I1227 20:54:09.605341    1360 start.go:309] selected driver: docker
	I1227 20:54:09.605341    1360 start.go:928] validating driver "docker" against <nil>
	I1227 20:54:09.605341    1360 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:54:09.611338    1360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:54:09.878307    1360 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:83 OomKillDisable:true NGoroutines:92 SystemTime:2025-12-27 20:54:09.858989893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1227 20:54:09.878307    1360 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 20:54:09.879314    1360 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 20:54:09.882308    1360 out.go:179] * Using Docker Desktop driver with root privileges
	I1227 20:54:09.885327    1360 cni.go:84] Creating CNI manager for ""
	I1227 20:54:09.885327    1360 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1227 20:54:09.885327    1360 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1227 20:54:09.885327    1360 start.go:353] cluster config:
	{Name:force-systemd-env-821200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-821200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:54:09.888311    1360 out.go:179] * Starting "force-systemd-env-821200" primary control-plane node in "force-systemd-env-821200" cluster
	I1227 20:54:09.892313    1360 cache.go:134] Beginning downloading kic base image for docker with docker
	I1227 20:54:09.895317    1360 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:54:09.899792    1360 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 20:54:09.899792    1360 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:54:09.900065    1360 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1227 20:54:09.900065    1360 cache.go:65] Caching tarball of preloaded images
	I1227 20:54:09.900130    1360 preload.go:251] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1227 20:54:09.900130    1360 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1227 20:54:09.900842    1360 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\config.json ...
	I1227 20:54:09.901087    1360 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\config.json: {Name:mk0cefbd7d7010c60e2d135f217169b239acee11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:54:09.983872    1360 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:54:09.983872    1360 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:54:09.983872    1360 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:54:09.983872    1360 start.go:360] acquireMachinesLock for force-systemd-env-821200: {Name:mk834a891165f5e362a55c564cf9c598c00e8ad2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:54:09.983872    1360 start.go:364] duration metric: took 0s to acquireMachinesLock for "force-systemd-env-821200"
	I1227 20:54:09.984862    1360 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-821200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-821200 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1227 20:54:09.984862    1360 start.go:125] createHost starting for "" (driver="docker")
	I1227 20:54:09.988860    1360 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 20:54:09.988860    1360 start.go:159] libmachine.API.Create for "force-systemd-env-821200" (driver="docker")
	I1227 20:54:09.988860    1360 client.go:173] LocalClient.Create starting
	I1227 20:54:09.989860    1360 main.go:144] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1227 20:54:09.989860    1360 main.go:144] libmachine: Decoding PEM data...
	I1227 20:54:09.989860    1360 main.go:144] libmachine: Parsing certificate...
	I1227 20:54:09.989860    1360 main.go:144] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1227 20:54:09.989860    1360 main.go:144] libmachine: Decoding PEM data...
	I1227 20:54:09.989860    1360 main.go:144] libmachine: Parsing certificate...
	I1227 20:54:09.993870    1360 cli_runner.go:164] Run: docker network inspect force-systemd-env-821200 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 20:54:10.046867    1360 cli_runner.go:211] docker network inspect force-systemd-env-821200 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 20:54:10.049873    1360 network_create.go:284] running [docker network inspect force-systemd-env-821200] to gather additional debugging logs...
	I1227 20:54:10.049873    1360 cli_runner.go:164] Run: docker network inspect force-systemd-env-821200
	W1227 20:54:10.102868    1360 cli_runner.go:211] docker network inspect force-systemd-env-821200 returned with exit code 1
	I1227 20:54:10.102868    1360 network_create.go:287] error running [docker network inspect force-systemd-env-821200]: docker network inspect force-systemd-env-821200: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-821200 not found
	I1227 20:54:10.102868    1360 network_create.go:289] output of [docker network inspect force-systemd-env-821200]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-821200 not found
	
	** /stderr **
	I1227 20:54:10.105878    1360 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:54:10.185863    1360 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1227 20:54:10.218072    1360 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1227 20:54:10.249427    1360 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1227 20:54:10.281293    1360 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1227 20:54:10.300292    1360 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00163d9e0}
	I1227 20:54:10.300292    1360 network_create.go:124] attempt to create docker network force-systemd-env-821200 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1227 20:54:10.305293    1360 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-821200 force-systemd-env-821200
	I1227 20:54:10.524432    1360 network_create.go:108] docker network force-systemd-env-821200 192.168.85.0/24 created
	I1227 20:54:10.524432    1360 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-821200" container
	I1227 20:54:10.535441    1360 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 20:54:10.603432    1360 cli_runner.go:164] Run: docker volume create force-systemd-env-821200 --label name.minikube.sigs.k8s.io=force-systemd-env-821200 --label created_by.minikube.sigs.k8s.io=true
	I1227 20:54:10.665441    1360 oci.go:103] Successfully created a docker volume force-systemd-env-821200
	I1227 20:54:10.670432    1360 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-821200-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-821200 --entrypoint /usr/bin/test -v force-systemd-env-821200:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 20:54:11.987231    1360 cli_runner.go:217] Completed: docker run --rm --name force-systemd-env-821200-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-821200 --entrypoint /usr/bin/test -v force-systemd-env-821200:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib: (1.3167832s)
	I1227 20:54:11.987231    1360 oci.go:107] Successfully prepared a docker volume force-systemd-env-821200
	I1227 20:54:11.987231    1360 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 20:54:11.987231    1360 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 20:54:11.992226    1360 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-821200:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 20:54:22.973119    1360 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-821200:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (10.980756s)
	I1227 20:54:22.973119    1360 kic.go:203] duration metric: took 10.9857501s to extract preloaded images to volume ...
	I1227 20:54:22.978132    1360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:54:23.303948    1360 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:true NGoroutines:90 SystemTime:2025-12-27 20:54:23.281446244 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1227 20:54:23.308941    1360 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 20:54:23.660367    1360 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-821200 --name force-systemd-env-821200 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-821200 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-821200 --network force-systemd-env-821200 --ip 192.168.85.2 --volume force-systemd-env-821200:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 20:54:24.450778    1360 cli_runner.go:164] Run: docker container inspect force-systemd-env-821200 --format={{.State.Running}}
	I1227 20:54:24.522770    1360 cli_runner.go:164] Run: docker container inspect force-systemd-env-821200 --format={{.State.Status}}
	I1227 20:54:24.584783    1360 cli_runner.go:164] Run: docker exec force-systemd-env-821200 stat /var/lib/dpkg/alternatives/iptables
	I1227 20:54:24.703540    1360 oci.go:144] the created container "force-systemd-env-821200" has a running status.
	I1227 20:54:24.703540    1360 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-env-821200\id_rsa...
	I1227 20:54:24.880615    1360 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-env-821200\id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 20:54:24.892629    1360 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-env-821200\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 20:54:24.970633    1360 cli_runner.go:164] Run: docker container inspect force-systemd-env-821200 --format={{.State.Status}}
	I1227 20:54:25.036631    1360 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 20:54:25.036631    1360 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-821200 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 20:54:25.157621    1360 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-env-821200\id_rsa...
	I1227 20:54:27.584655    1360 cli_runner.go:164] Run: docker container inspect force-systemd-env-821200 --format={{.State.Status}}
	I1227 20:54:27.641813    1360 machine.go:94] provisionDockerMachine start ...
	I1227 20:54:27.646166    1360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-821200
	I1227 20:54:27.706566    1360 main.go:144] libmachine: Using SSH client type: native
	I1227 20:54:27.721765    1360 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil>  [] 0s} 127.0.0.1 60743 <nil> <nil>}
	I1227 20:54:27.721765    1360 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:54:27.911991    1360 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-821200
	
	I1227 20:54:27.911991    1360 ubuntu.go:182] provisioning hostname "force-systemd-env-821200"
	I1227 20:54:27.914978    1360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-821200
	I1227 20:54:27.972592    1360 main.go:144] libmachine: Using SSH client type: native
	I1227 20:54:27.973596    1360 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil>  [] 0s} 127.0.0.1 60743 <nil> <nil>}
	I1227 20:54:27.973596    1360 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-821200 && echo "force-systemd-env-821200" | sudo tee /etc/hostname
	I1227 20:54:28.147894    1360 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-821200
	
	I1227 20:54:28.153085    1360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-821200
	I1227 20:54:28.209239    1360 main.go:144] libmachine: Using SSH client type: native
	I1227 20:54:28.210229    1360 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil>  [] 0s} 127.0.0.1 60743 <nil> <nil>}
	I1227 20:54:28.210229    1360 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-821200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-821200/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-821200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:54:28.370633    1360 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:54:28.370633    1360 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1227 20:54:28.370633    1360 ubuntu.go:190] setting up certificates
	I1227 20:54:28.370633    1360 provision.go:84] configureAuth start
	I1227 20:54:28.374626    1360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-821200
	I1227 20:54:28.427617    1360 provision.go:143] copyHostCerts
	I1227 20:54:28.427617    1360 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1227 20:54:28.428623    1360 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1227 20:54:28.428623    1360 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1227 20:54:28.428623    1360 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1227 20:54:28.429629    1360 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1227 20:54:28.429629    1360 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1227 20:54:28.429629    1360 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1227 20:54:28.429629    1360 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1227 20:54:28.430626    1360 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1227 20:54:28.430626    1360 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1227 20:54:28.430626    1360 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1227 20:54:28.431622    1360 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1227 20:54:28.431622    1360 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.force-systemd-env-821200 san=[127.0.0.1 192.168.85.2 force-systemd-env-821200 localhost minikube]
	I1227 20:54:28.623181    1360 provision.go:177] copyRemoteCerts
	I1227 20:54:28.627186    1360 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:54:28.630180    1360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-821200
	I1227 20:54:28.683168    1360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60743 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-env-821200\id_rsa Username:docker}
	I1227 20:54:28.810381    1360 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1227 20:54:28.811378    1360 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 20:54:28.838380    1360 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1227 20:54:28.838380    1360 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I1227 20:54:28.865385    1360 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1227 20:54:28.865385    1360 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:54:28.894054    1360 provision.go:87] duration metric: took 523.4142ms to configureAuth
	I1227 20:54:28.894054    1360 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:54:28.895053    1360 config.go:182] Loaded profile config "force-systemd-env-821200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 20:54:28.898047    1360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-821200
	I1227 20:54:28.950049    1360 main.go:144] libmachine: Using SSH client type: native
	I1227 20:54:28.950049    1360 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil>  [] 0s} 127.0.0.1 60743 <nil> <nil>}
	I1227 20:54:28.950049    1360 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1227 20:54:29.123498    1360 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1227 20:54:29.123498    1360 ubuntu.go:71] root file system type: overlay
	I1227 20:54:29.123498    1360 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1227 20:54:29.128703    1360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-821200
	I1227 20:54:29.181830    1360 main.go:144] libmachine: Using SSH client type: native
	I1227 20:54:29.181830    1360 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil>  [] 0s} 127.0.0.1 60743 <nil> <nil>}
	I1227 20:54:29.181830    1360 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1227 20:54:29.355155    1360 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1227 20:54:29.358155    1360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-821200
	I1227 20:54:29.411148    1360 main.go:144] libmachine: Using SSH client type: native
	I1227 20:54:29.412150    1360 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil>  [] 0s} 127.0.0.1 60743 <nil> <nil>}
	I1227 20:54:29.412150    1360 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1227 20:54:37.697185    1360 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-27 20:54:29.337291676 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1227 20:54:37.698183    1360 machine.go:97] duration metric: took 10.0561773s to provisionDockerMachine
	I1227 20:54:37.698183    1360 client.go:176] duration metric: took 27.7089749s to LocalClient.Create
	I1227 20:54:37.698183    1360 start.go:167] duration metric: took 27.7089749s to libmachine.API.Create "force-systemd-env-821200"
	I1227 20:54:37.698183    1360 start.go:293] postStartSetup for "force-systemd-env-821200" (driver="docker")
	I1227 20:54:37.698183    1360 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:54:37.703198    1360 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:54:37.706181    1360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-821200
	I1227 20:54:37.756184    1360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60743 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-env-821200\id_rsa Username:docker}
	I1227 20:54:37.903690    1360 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:54:37.911670    1360 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:54:37.911670    1360 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:54:37.911670    1360 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1227 20:54:37.911670    1360 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1227 20:54:37.912673    1360 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\136562.pem -> 136562.pem in /etc/ssl/certs
	I1227 20:54:37.912673    1360 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\136562.pem -> /etc/ssl/certs/136562.pem
	I1227 20:54:37.917679    1360 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:54:37.931681    1360 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\136562.pem --> /etc/ssl/certs/136562.pem (1708 bytes)
	I1227 20:54:37.973679    1360 start.go:296] duration metric: took 275.4926ms for postStartSetup
	I1227 20:54:37.979681    1360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-821200
	I1227 20:54:38.047674    1360 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\config.json ...
	I1227 20:54:38.055684    1360 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:54:38.059677    1360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-821200
	I1227 20:54:38.119682    1360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60743 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-env-821200\id_rsa Username:docker}
	I1227 20:54:38.396690    1360 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:54:38.409689    1360 start.go:128] duration metric: took 28.4234677s to createHost
	I1227 20:54:38.409689    1360 start.go:83] releasing machines lock for "force-systemd-env-821200", held for 28.425459s
	I1227 20:54:38.414685    1360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-821200
	I1227 20:54:38.470686    1360 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1227 20:54:38.474681    1360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-821200
	I1227 20:54:38.475691    1360 ssh_runner.go:195] Run: cat /version.json
	I1227 20:54:38.478684    1360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-821200
	I1227 20:54:38.542690    1360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60743 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-env-821200\id_rsa Username:docker}
	I1227 20:54:38.542690    1360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60743 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-env-821200\id_rsa Username:docker}
	W1227 20:54:38.668689    1360 start.go:879] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1227 20:54:38.674695    1360 ssh_runner.go:195] Run: systemctl --version
	I1227 20:54:38.693695    1360 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:54:38.705694    1360 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:54:38.712697    1360 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:54:38.769691    1360 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1227 20:54:38.769691    1360 start.go:496] detecting cgroup driver to use...
	I1227 20:54:38.769691    1360 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 20:54:38.769691    1360 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1227 20:54:38.773683    1360 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1227 20:54:38.773683    1360 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1227 20:54:38.800700    1360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 20:54:38.827686    1360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 20:54:38.844692    1360 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1227 20:54:38.849689    1360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1227 20:54:38.875713    1360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 20:54:38.900697    1360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 20:54:38.926695    1360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 20:54:38.948685    1360 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:54:38.970726    1360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 20:54:39.000710    1360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 20:54:39.030701    1360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
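The run of `sed` edits above rewrites `/etc/containerd/config.toml` to use the systemd cgroup driver. The key substitution can be reproduced in isolation; the sample file content and `/tmp` path below are illustrative stand-ins, not taken from the log:

```shell
# Sample containerd config with the default (cgroupfs) setting.
cat > /tmp/config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF

# The same substitution minikube runs: flip SystemdCgroup to true while
# preserving the line's original indentation via the \1 backreference.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /tmp/config.toml

grep 'SystemdCgroup' /tmp/config.toml   # now reads: SystemdCgroup = true
```

Capturing the indentation in a group keeps the edit safe regardless of how deeply the key is nested in the TOML tree.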
	I1227 20:54:39.053710    1360 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:54:39.074687    1360 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:54:39.093710    1360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:54:39.288235    1360 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1227 20:54:39.465010    1360 start.go:496] detecting cgroup driver to use...
	I1227 20:54:39.465010    1360 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 20:54:39.471898    1360 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1227 20:54:39.498902    1360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:54:39.527094    1360 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:54:39.612861    1360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:54:39.642352    1360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 20:54:39.667189    1360 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:54:39.701371    1360 ssh_runner.go:195] Run: which cri-dockerd
	I1227 20:54:39.719356    1360 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1227 20:54:39.735366    1360 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1227 20:54:39.760370    1360 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1227 20:54:39.930777    1360 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1227 20:54:40.079436    1360 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1227 20:54:40.079436    1360 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1227 20:54:40.109797    1360 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1227 20:54:40.132052    1360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:54:40.274667    1360 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1227 20:54:41.511388    1360 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.2367054s)
	I1227 20:54:41.515403    1360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:54:41.706190    1360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1227 20:54:41.746056    1360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 20:54:41.779343    1360 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1227 20:54:41.978686    1360 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1227 20:54:42.143607    1360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:54:42.426376    1360 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1227 20:54:42.454396    1360 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1227 20:54:42.478384    1360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:54:42.630622    1360 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1227 20:54:42.744620    1360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 20:54:42.764611    1360 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1227 20:54:42.768615    1360 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1227 20:54:42.776620    1360 start.go:574] Will wait 60s for crictl version
	I1227 20:54:42.780611    1360 ssh_runner.go:195] Run: which crictl
	I1227 20:54:42.792621    1360 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:54:42.840621    1360 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1227 20:54:42.844616    1360 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 20:54:42.892619    1360 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 20:54:42.936619    1360 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I1227 20:54:42.939615    1360 cli_runner.go:164] Run: docker exec -t force-systemd-env-821200 dig +short host.docker.internal
	I1227 20:54:43.118707    1360 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1227 20:54:43.123714    1360 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1227 20:54:43.132707    1360 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
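The `/etc/hosts` rewrite above is idempotent: it strips any existing `host.minikube.internal` entry before appending the current one, so repeated starts never accumulate duplicates. A standalone sketch of the same technique, using a throwaway file in place of `/etc/hosts` (paths and addresses here are illustrative):

```shell
HOSTS=/tmp/hosts.demo   # stand-in for /etc/hosts
printf '127.0.0.1\tlocalhost\n192.168.65.254\thost.minikube.internal\n' > "$HOSTS"

# Drop any existing entry, then append the current one. Rewriting through a
# temp file and cp (rather than mv) preserves the target's inode and
# permissions, which matters when the target is the real /etc/hosts.
{ grep -v $'\thost.minikube.internal$' "$HOSTS"; printf '192.168.65.254\thost.minikube.internal\n'; } > "$HOSTS.tmp"
cp "$HOSTS.tmp" "$HOSTS"

grep -c 'host.minikube.internal' "$HOSTS"   # exactly one entry, even after reruns
```

The `$'\t…'` quoting anchors the match on a literal tab plus the hostname at end-of-line, so unrelated hosts entries pass through untouched.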
	I1227 20:54:43.156704    1360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" force-systemd-env-821200
	I1227 20:54:43.213708    1360 kubeadm.go:884] updating cluster {Name:force-systemd-env-821200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-821200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:54:43.213708    1360 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 20:54:43.216708    1360 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 20:54:43.260712    1360 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1227 20:54:43.260712    1360 docker.go:624] Images already preloaded, skipping extraction
	I1227 20:54:43.274263    1360 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 20:54:43.324414    1360 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1227 20:54:43.324414    1360 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:54:43.324414    1360 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 docker true true} ...
	I1227 20:54:43.324414    1360 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-821200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-821200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:54:43.330421    1360 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1227 20:54:43.431452    1360 cni.go:84] Creating CNI manager for ""
	I1227 20:54:43.431452    1360 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1227 20:54:43.431452    1360 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:54:43.431452    1360 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-821200 NodeName:force-systemd-env-821200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:54:43.431452    1360 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-env-821200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:54:43.437427    1360 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:54:43.452406    1360 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:54:43.456409    1360 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:54:43.471438    1360 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1227 20:54:43.494688    1360 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:54:43.521679    1360 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1227 20:54:43.546668    1360 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:54:43.554675    1360 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:54:43.575668    1360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:54:43.721669    1360 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:54:43.744685    1360 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200 for IP: 192.168.85.2
	I1227 20:54:43.744685    1360 certs.go:195] generating shared ca certs ...
	I1227 20:54:43.744685    1360 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:54:43.745689    1360 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1227 20:54:43.745689    1360 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1227 20:54:43.745689    1360 certs.go:257] generating profile certs ...
	I1227 20:54:43.746689    1360 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\client.key
	I1227 20:54:43.746689    1360 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\client.crt with IP's: []
	I1227 20:54:43.847679    1360 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\client.crt ...
	I1227 20:54:43.847679    1360 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\client.crt: {Name:mkb29b19046001494911f2d748bd4f79a955ad90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:54:43.848672    1360 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\client.key ...
	I1227 20:54:43.848672    1360 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\client.key: {Name:mkfb17cbcf5dbdb4b5fb60d998a63b85790a04c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:54:43.849671    1360 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\apiserver.key.a4e93958
	I1227 20:54:43.849671    1360 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\apiserver.crt.a4e93958 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1227 20:54:43.932673    1360 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\apiserver.crt.a4e93958 ...
	I1227 20:54:43.932673    1360 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\apiserver.crt.a4e93958: {Name:mk95b8e3bcdac3c2340647075a58613058c2c539 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:54:43.933682    1360 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\apiserver.key.a4e93958 ...
	I1227 20:54:43.933682    1360 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\apiserver.key.a4e93958: {Name:mk7ce35974a1321e1199649c18cfcea298de5ee4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:54:43.934690    1360 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\apiserver.crt.a4e93958 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\apiserver.crt
	I1227 20:54:43.950680    1360 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\apiserver.key.a4e93958 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\apiserver.key
	I1227 20:54:43.951684    1360 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\proxy-client.key
	I1227 20:54:43.951684    1360 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\proxy-client.crt with IP's: []
	I1227 20:54:44.168370    1360 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\proxy-client.crt ...
	I1227 20:54:44.168370    1360 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\proxy-client.crt: {Name:mk1592d1ca8ea85d16e105f49f2daf78c873fa72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:54:44.169376    1360 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\proxy-client.key ...
	I1227 20:54:44.169376    1360 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\proxy-client.key: {Name:mk792b5efc1b891d1069fae618a7b8a45346f011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:54:44.170369    1360 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:54:44.170369    1360 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:54:44.170369    1360 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:54:44.170369    1360 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:54:44.170369    1360 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:54:44.170369    1360 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:54:44.170369    1360 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:54:44.185359    1360 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:54:44.186372    1360 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13656.pem (1338 bytes)
	W1227 20:54:44.186372    1360 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13656_empty.pem, impossibly tiny 0 bytes
	I1227 20:54:44.186372    1360 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1227 20:54:44.187366    1360 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1227 20:54:44.187366    1360 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1227 20:54:44.187366    1360 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1227 20:54:44.187366    1360 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\136562.pem (1708 bytes)
	I1227 20:54:44.187366    1360 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:54:44.188403    1360 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13656.pem -> /usr/share/ca-certificates/13656.pem
	I1227 20:54:44.188403    1360 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\136562.pem -> /usr/share/ca-certificates/136562.pem
	I1227 20:54:44.188403    1360 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:54:44.220377    1360 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 20:54:44.251378    1360 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:54:44.286595    1360 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:54:44.350158    1360 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 20:54:44.379356    1360 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:54:44.405815    1360 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:54:44.434688    1360 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-821200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 20:54:44.464815    1360 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:54:44.495112    1360 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13656.pem --> /usr/share/ca-certificates/13656.pem (1338 bytes)
	I1227 20:54:44.537585    1360 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\136562.pem --> /usr/share/ca-certificates/136562.pem (1708 bytes)
	I1227 20:54:44.565596    1360 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:54:44.600600    1360 ssh_runner.go:195] Run: openssl version
	I1227 20:54:44.617590    1360 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:54:44.635582    1360 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:54:44.651586    1360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:54:44.661399    1360 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:54:44.666065    1360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:54:44.721749    1360 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:54:44.738736    1360 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 20:54:44.754738    1360 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13656.pem
	I1227 20:54:44.771737    1360 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13656.pem /etc/ssl/certs/13656.pem
	I1227 20:54:44.786742    1360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13656.pem
	I1227 20:54:44.794741    1360 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:04 /usr/share/ca-certificates/13656.pem
	I1227 20:54:44.799749    1360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13656.pem
	I1227 20:54:44.869725    1360 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:54:44.886745    1360 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13656.pem /etc/ssl/certs/51391683.0
	I1227 20:54:44.905740    1360 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/136562.pem
	I1227 20:54:44.927733    1360 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/136562.pem /etc/ssl/certs/136562.pem
	I1227 20:54:44.945737    1360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136562.pem
	I1227 20:54:44.955727    1360 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:04 /usr/share/ca-certificates/136562.pem
	I1227 20:54:44.959746    1360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136562.pem
	I1227 20:54:45.008314    1360 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:54:45.073673    1360 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/136562.pem /etc/ssl/certs/3ec20f2e.0
	I1227 20:54:45.090669    1360 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:54:45.102813    1360 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 20:54:45.103802    1360 kubeadm.go:401] StartCluster: {Name:force-systemd-env-821200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-821200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:54:45.108810    1360 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1227 20:54:45.147900    1360 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:54:45.164901    1360 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 20:54:45.178900    1360 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:54:45.181900    1360 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:54:45.320989    1360 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:54:45.321041    1360 kubeadm.go:158] found existing configuration files:
	
	I1227 20:54:45.328281    1360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:54:45.388668    1360 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:54:45.394503    1360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:54:45.411678    1360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:54:45.428681    1360 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:54:45.432687    1360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:54:45.452683    1360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:54:45.466671    1360 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:54:45.470671    1360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:54:45.492689    1360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:54:45.507687    1360 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:54:45.514688    1360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 20:54:45.534686    1360 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:54:45.670957    1360 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1227 20:54:45.779992    1360 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 20:54:45.932478    1360 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:58:48.271570    1360 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 20:58:48.271643    1360 kubeadm.go:319] 
	I1227 20:58:48.271789    1360 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 20:58:48.275676    1360 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 20:58:48.275676    1360 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:58:48.275676    1360 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:58:48.276201    1360 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1227 20:58:48.276297    1360 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1227 20:58:48.276297    1360 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1227 20:58:48.276297    1360 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1227 20:58:48.276297    1360 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1227 20:58:48.276824    1360 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1227 20:58:48.276944    1360 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1227 20:58:48.277141    1360 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1227 20:58:48.277298    1360 kubeadm.go:319] CONFIG_INET: enabled
	I1227 20:58:48.277368    1360 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1227 20:58:48.277368    1360 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1227 20:58:48.277368    1360 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1227 20:58:48.277368    1360 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1227 20:58:48.277891    1360 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1227 20:58:48.278044    1360 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1227 20:58:48.278197    1360 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1227 20:58:48.278197    1360 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1227 20:58:48.278197    1360 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1227 20:58:48.278197    1360 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1227 20:58:48.278197    1360 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1227 20:58:48.278720    1360 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1227 20:58:48.278907    1360 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1227 20:58:48.278907    1360 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1227 20:58:48.278907    1360 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1227 20:58:48.278907    1360 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1227 20:58:48.278907    1360 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1227 20:58:48.279425    1360 kubeadm.go:319] OS: Linux
	I1227 20:58:48.279548    1360 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:58:48.279548    1360 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 20:58:48.279548    1360 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:58:48.279548    1360 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:58:48.279548    1360 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:58:48.280072    1360 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:58:48.280196    1360 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:58:48.280196    1360 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:58:48.280196    1360 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 20:58:48.280196    1360 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:58:48.280196    1360 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:58:48.280922    1360 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 20:58:48.280922    1360 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 20:58:48.591151    1360 out.go:252]   - Generating certificates and keys ...
	I1227 20:58:48.592306    1360 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:58:48.592306    1360 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:58:48.592306    1360 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 20:58:48.592306    1360 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 20:58:48.592852    1360 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 20:58:48.592993    1360 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 20:58:48.593068    1360 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 20:58:48.593197    1360 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-821200 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 20:58:48.593197    1360 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 20:58:48.593197    1360 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-821200 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 20:58:48.593840    1360 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 20:58:48.593840    1360 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 20:58:48.593840    1360 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 20:58:48.593840    1360 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:58:48.593840    1360 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:58:48.594363    1360 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 20:58:48.594438    1360 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:58:48.594438    1360 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:58:48.594438    1360 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:58:48.594438    1360 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:58:48.594993    1360 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 20:58:48.603013    1360 out.go:252]   - Booting up control plane ...
	I1227 20:58:48.603013    1360 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:58:48.603013    1360 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:58:48.603013    1360 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:58:48.603013    1360 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:58:48.603013    1360 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 20:58:48.604025    1360 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 20:58:48.604025    1360 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:58:48.604025    1360 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:58:48.604025    1360 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 20:58:48.604025    1360 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 20:58:48.604025    1360 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001639008s
	I1227 20:58:48.604025    1360 kubeadm.go:319] 
	I1227 20:58:48.604997    1360 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 20:58:48.605082    1360 kubeadm.go:319] 	- The kubelet is not running
	I1227 20:58:48.605257    1360 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 20:58:48.605257    1360 kubeadm.go:319] 
	I1227 20:58:48.605450    1360 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 20:58:48.605450    1360 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 20:58:48.605651    1360 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 20:58:48.605651    1360 kubeadm.go:319] 
	W1227 20:58:48.605651    1360 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-821200 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-821200 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001639008s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1227 20:58:48.610030    1360 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1227 20:58:49.070505    1360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:58:49.089165    1360 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:58:49.093608    1360 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:58:49.106426    1360 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:58:49.106466    1360 kubeadm.go:158] found existing configuration files:
	
	I1227 20:58:49.110819    1360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:58:49.123603    1360 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:58:49.129806    1360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:58:49.146067    1360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:58:49.159729    1360 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:58:49.163925    1360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:58:49.181387    1360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:58:49.197402    1360 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:58:49.201976    1360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:58:49.219608    1360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:58:49.231396    1360 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:58:49.235844    1360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 20:58:49.257511    1360 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:58:49.370172    1360 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1227 20:58:49.455949    1360 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 20:58:49.555860    1360 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 21:02:50.141962    1360 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1227 21:02:50.142070    1360 kubeadm.go:319] 
	I1227 21:02:50.142644    1360 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 21:02:50.147018    1360 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 21:02:50.147069    1360 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 21:02:50.147069    1360 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 21:02:50.147609    1360 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1227 21:02:50.147785    1360 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1227 21:02:50.147785    1360 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1227 21:02:50.147785    1360 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1227 21:02:50.147785    1360 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1227 21:02:50.148314    1360 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1227 21:02:50.148445    1360 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1227 21:02:50.148670    1360 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1227 21:02:50.148670    1360 kubeadm.go:319] CONFIG_INET: enabled
	I1227 21:02:50.148670    1360 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1227 21:02:50.148670    1360 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1227 21:02:50.149199    1360 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1227 21:02:50.149601    1360 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1227 21:02:50.149755    1360 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1227 21:02:50.149958    1360 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1227 21:02:50.150180    1360 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1227 21:02:50.150211    1360 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1227 21:02:50.150211    1360 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1227 21:02:50.150211    1360 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1227 21:02:50.150211    1360 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1227 21:02:50.151671    1360 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1227 21:02:50.151671    1360 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1227 21:02:50.151671    1360 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1227 21:02:50.152197    1360 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1227 21:02:50.152520    1360 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1227 21:02:50.152718    1360 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1227 21:02:50.152904    1360 kubeadm.go:319] OS: Linux
	I1227 21:02:50.152982    1360 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 21:02:50.152982    1360 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 21:02:50.152982    1360 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 21:02:50.152982    1360 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 21:02:50.153643    1360 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 21:02:50.153773    1360 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 21:02:50.153863    1360 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 21:02:50.154068    1360 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 21:02:50.154068    1360 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 21:02:50.154068    1360 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 21:02:50.154700    1360 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 21:02:50.154837    1360 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 21:02:50.154939    1360 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 21:02:50.157147    1360 out.go:252]   - Generating certificates and keys ...
	I1227 21:02:50.157147    1360 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 21:02:50.157147    1360 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 21:02:50.158162    1360 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 21:02:50.158162    1360 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 21:02:50.158162    1360 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 21:02:50.158162    1360 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 21:02:50.158162    1360 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 21:02:50.158162    1360 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 21:02:50.159147    1360 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 21:02:50.159147    1360 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 21:02:50.159147    1360 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 21:02:50.159147    1360 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 21:02:50.159147    1360 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 21:02:50.159147    1360 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 21:02:50.159147    1360 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 21:02:50.160147    1360 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 21:02:50.160147    1360 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 21:02:50.160147    1360 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 21:02:50.160147    1360 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 21:02:50.165143    1360 out.go:252]   - Booting up control plane ...
	I1227 21:02:50.165143    1360 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 21:02:50.165143    1360 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 21:02:50.166140    1360 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 21:02:50.166140    1360 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 21:02:50.166140    1360 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 21:02:50.166140    1360 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 21:02:50.166140    1360 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 21:02:50.166140    1360 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 21:02:50.167140    1360 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 21:02:50.167140    1360 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 21:02:50.167140    1360 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000193739s
	I1227 21:02:50.167140    1360 kubeadm.go:319] 
	I1227 21:02:50.167140    1360 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 21:02:50.167140    1360 kubeadm.go:319] 	- The kubelet is not running
	I1227 21:02:50.168151    1360 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 21:02:50.168151    1360 kubeadm.go:319] 
	I1227 21:02:50.168151    1360 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 21:02:50.168151    1360 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 21:02:50.168151    1360 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 21:02:50.168151    1360 kubeadm.go:319] 
	I1227 21:02:50.168151    1360 kubeadm.go:403] duration metric: took 8m5.0580182s to StartCluster
	I1227 21:02:50.168151    1360 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 21:02:50.173134    1360 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 21:02:50.242146    1360 cri.go:96] found id: ""
	I1227 21:02:50.242146    1360 logs.go:282] 0 containers: []
	W1227 21:02:50.242146    1360 logs.go:284] No container was found matching "kube-apiserver"
	I1227 21:02:50.242146    1360 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 21:02:50.247135    1360 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 21:02:50.306137    1360 cri.go:96] found id: ""
	I1227 21:02:50.306137    1360 logs.go:282] 0 containers: []
	W1227 21:02:50.306137    1360 logs.go:284] No container was found matching "etcd"
	I1227 21:02:50.306137    1360 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 21:02:50.309804    1360 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 21:02:50.358810    1360 cri.go:96] found id: ""
	I1227 21:02:50.358810    1360 logs.go:282] 0 containers: []
	W1227 21:02:50.358810    1360 logs.go:284] No container was found matching "coredns"
	I1227 21:02:50.358810    1360 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 21:02:50.362801    1360 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 21:02:50.412669    1360 cri.go:96] found id: ""
	I1227 21:02:50.412669    1360 logs.go:282] 0 containers: []
	W1227 21:02:50.412669    1360 logs.go:284] No container was found matching "kube-scheduler"
	I1227 21:02:50.412669    1360 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 21:02:50.417272    1360 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 21:02:50.468017    1360 cri.go:96] found id: ""
	I1227 21:02:50.468017    1360 logs.go:282] 0 containers: []
	W1227 21:02:50.468017    1360 logs.go:284] No container was found matching "kube-proxy"
	I1227 21:02:50.468017    1360 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 21:02:50.472018    1360 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 21:02:50.521254    1360 cri.go:96] found id: ""
	I1227 21:02:50.521254    1360 logs.go:282] 0 containers: []
	W1227 21:02:50.521254    1360 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 21:02:50.521254    1360 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 21:02:50.527192    1360 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 21:02:50.604979    1360 cri.go:96] found id: ""
	I1227 21:02:50.604979    1360 logs.go:282] 0 containers: []
	W1227 21:02:50.604979    1360 logs.go:284] No container was found matching "kindnet"
	I1227 21:02:50.604979    1360 logs.go:123] Gathering logs for dmesg ...
	I1227 21:02:50.604979    1360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 21:02:50.653991    1360 logs.go:123] Gathering logs for describe nodes ...
	I1227 21:02:50.653991    1360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 21:02:50.758843    1360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 21:02:50.751694   10281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 21:02:50.753305   10281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 21:02:50.754413   10281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 21:02:50.755496   10281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 21:02:50.756399   10281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 21:02:50.751694   10281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 21:02:50.753305   10281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 21:02:50.754413   10281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 21:02:50.755496   10281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 21:02:50.756399   10281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 21:02:50.758843    1360 logs.go:123] Gathering logs for Docker ...
	I1227 21:02:50.758843    1360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1227 21:02:50.789836    1360 logs.go:123] Gathering logs for container status ...
	I1227 21:02:50.789836    1360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 21:02:50.842866    1360 logs.go:123] Gathering logs for kubelet ...
	I1227 21:02:50.842866    1360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1227 21:02:50.915706    1360 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000193739s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 21:02:50.915706    1360 out.go:285] * 
	* 
	W1227 21:02:50.915706    1360 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000193739s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000193739s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 21:02:50.915706    1360 out.go:285] * 
	W1227 21:02:50.916413    1360 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 21:02:50.922522    1360 out.go:203] 
	W1227 21:02:50.926991    1360 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000193739s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 21:02:50.926991    1360 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 21:02:50.926991    1360 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 21:02:50.929002    1360 out.go:203] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-env-821200 --memory=3072 --alsologtostderr -v=5 --driver=docker" : exit status 109
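The suggestion printed in the stderr above can be tried by recreating the profile with the kubelet pinned to the systemd cgroup driver. A minimal sketch, reusing the profile name and flags shown in this log; whether this resolves the kubelet health-check timeout on this WSL2 host is not verified:

```shell
# Remove the failed profile, then retry with the kubelet's cgroup driver
# forced to systemd, per the log's suggestion. Profile name, memory, and
# driver flags are copied from the failing invocation above.
out/minikube-windows-amd64.exe delete -p force-systemd-env-821200
out/minikube-windows-amd64.exe start -p force-systemd-env-821200 \
  --memory=3072 --alsologtostderr -v=5 --driver=docker \
  --extra-config=kubelet.cgroup-driver=systemd

# Confirm which cgroup driver Docker reports inside the node (the same
# check the test itself runs next).
out/minikube-windows-amd64.exe -p force-systemd-env-821200 ssh \
  "docker info --format {{.CgroupDriver}}"
```

Line continuations assume a POSIX shell (e.g. Git Bash on the Windows host); in cmd.exe the commands would need to be on single lines.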
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-821200 ssh "docker info --format {{.CgroupDriver}}"
E1227 21:02:51.792228   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-12-27 21:02:52.0193788 +0000 UTC m=+4044.143971601
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-env-821200
helpers_test.go:244: (dbg) docker inspect force-systemd-env-821200:

-- stdout --
	[
	    {
	        "Id": "57c37911984047798965cc5f2a2a7e1cf5a5b4366cd27dc01887f6d07412625e",
	        "Created": "2025-12-27T20:54:23.710060881Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 271382,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:54:24.027125431Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/57c37911984047798965cc5f2a2a7e1cf5a5b4366cd27dc01887f6d07412625e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/57c37911984047798965cc5f2a2a7e1cf5a5b4366cd27dc01887f6d07412625e/hostname",
	        "HostsPath": "/var/lib/docker/containers/57c37911984047798965cc5f2a2a7e1cf5a5b4366cd27dc01887f6d07412625e/hosts",
	        "LogPath": "/var/lib/docker/containers/57c37911984047798965cc5f2a2a7e1cf5a5b4366cd27dc01887f6d07412625e/57c37911984047798965cc5f2a2a7e1cf5a5b4366cd27dc01887f6d07412625e-json.log",
	        "Name": "/force-systemd-env-821200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-821200:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-821200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0055cbfec6c7119b9b18fddafd4af476032400bd53644c9e26b526b33cd7955b-init/diff:/var/lib/docker/overlay2/cc9bc6a1bc34df01fcf2646a74af47280e16e85e4444f747f528eb17ae725d09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0055cbfec6c7119b9b18fddafd4af476032400bd53644c9e26b526b33cd7955b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0055cbfec6c7119b9b18fddafd4af476032400bd53644c9e26b526b33cd7955b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0055cbfec6c7119b9b18fddafd4af476032400bd53644c9e26b526b33cd7955b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-821200",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-821200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-821200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-821200",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-821200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "05a3d8982ee26f74b517f37f436ba45b000730d6b4e9b4d1879ace510f140326",
	            "SandboxKey": "/var/run/docker/netns/05a3d8982ee2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60743"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60744"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60745"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60746"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60742"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-821200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e5226489921402aacd3739097d31d60f208b19afee96cc88d9518d4627f646b6",
	                    "EndpointID": "d5b33fdae31e3dc50e4bc1bd386e9d80c1fb9f04195d7a643f5968f43b01b6db",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-821200",
	                        "57c379119840"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-env-821200 -n force-systemd-env-821200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-env-821200 -n force-systemd-env-821200: exit status 6 (612.5812ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 21:02:52.656966     716 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-821200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-821200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-821200 logs -n 25: (1.2166974s)
helpers_test.go:261: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                     ARGS                                      │         PROFILE          │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-630300 sudo docker system info                                        │ auto-630300              │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │ 27 Dec 25 21:02 UTC │
	│ ssh     │ -p flannel-630300 sudo cat /etc/resolv.conf                                   │ flannel-630300           │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │ 27 Dec 25 21:02 UTC │
	│ ssh     │ -p auto-630300 sudo systemctl status cri-docker --all --full --no-pager       │ auto-630300              │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │ 27 Dec 25 21:02 UTC │
	│ ssh     │ -p flannel-630300 sudo crictl pods                                            │ flannel-630300           │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │ 27 Dec 25 21:02 UTC │
	│ ssh     │ -p auto-630300 sudo systemctl cat cri-docker --no-pager                       │ auto-630300              │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │ 27 Dec 25 21:02 UTC │
	│ ssh     │ -p flannel-630300 sudo crictl ps --all                                        │ flannel-630300           │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │ 27 Dec 25 21:02 UTC │
	│ ssh     │ -p auto-630300 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf  │ auto-630300              │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │ 27 Dec 25 21:02 UTC │
	│ ssh     │ -p auto-630300 sudo cat /usr/lib/systemd/system/cri-docker.service            │ auto-630300              │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │ 27 Dec 25 21:02 UTC │
	│ ssh     │ -p auto-630300 sudo cri-dockerd --version                                     │ auto-630300              │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │ 27 Dec 25 21:02 UTC │
	│ ssh     │ -p auto-630300 sudo systemctl status containerd --all --full --no-pager       │ auto-630300              │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │ 27 Dec 25 21:02 UTC │
	│ ssh     │ -p flannel-630300 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \; │ flannel-630300           │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │ 27 Dec 25 21:02 UTC │
	│ ssh     │ -p auto-630300 sudo systemctl cat containerd --no-pager                       │ auto-630300              │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │ 27 Dec 25 21:02 UTC │
	│ ssh     │ -p flannel-630300 sudo ip a s                                                 │ flannel-630300           │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │ 27 Dec 25 21:02 UTC │
	│ ssh     │ -p auto-630300 sudo cat /lib/systemd/system/containerd.service                │ auto-630300              │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │ 27 Dec 25 21:02 UTC │
	│ ssh     │ -p flannel-630300 sudo ip r s                                                 │ flannel-630300           │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │ 27 Dec 25 21:02 UTC │
	│ ssh     │ -p auto-630300 sudo cat /etc/containerd/config.toml                           │ auto-630300              │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │ 27 Dec 25 21:02 UTC │
	│ ssh     │ -p flannel-630300 sudo iptables-save                                          │ flannel-630300           │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │ 27 Dec 25 21:02 UTC │
	│ ssh     │ -p auto-630300 sudo containerd config dump                                    │ auto-630300              │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │ 27 Dec 25 21:02 UTC │
	│ ssh     │ -p flannel-630300 sudo iptables -t nat -L -n -v                               │ flannel-630300           │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │ 27 Dec 25 21:02 UTC │
	│ ssh     │ -p auto-630300 sudo systemctl status crio --all --full --no-pager             │ auto-630300              │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │                     │
	│ ssh     │ force-systemd-env-821200 ssh docker info --format {{.CgroupDriver}}           │ force-systemd-env-821200 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │ 27 Dec 25 21:02 UTC │
	│ ssh     │ -p auto-630300 sudo systemctl cat crio --no-pager                             │ auto-630300              │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │ 27 Dec 25 21:02 UTC │
	│ ssh     │ -p flannel-630300 sudo cat /run/flannel/subnet.env                            │ flannel-630300           │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │ 27 Dec 25 21:02 UTC │
	│ ssh     │ -p auto-630300 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;   │ auto-630300              │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │ 27 Dec 25 21:02 UTC │
	│ ssh     │ -p flannel-630300 sudo cat /etc/kube-flannel/cni-conf.json                    │ flannel-630300           │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 21:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 21:01:50
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 21:01:50.815645    9692 out.go:360] Setting OutFile to fd 780 ...
	I1227 21:01:50.860574    9692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 21:01:50.860574    9692 out.go:374] Setting ErrFile to fd 1252...
	I1227 21:01:50.860700    9692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 21:01:50.877675    9692 out.go:368] Setting JSON to false
	I1227 21:01:50.880675    9692 start.go:133] hostinfo: {"hostname":"minikube4","uptime":4697,"bootTime":1766864613,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"22H2","kernelVersion":"10.0.19045.6691 Build 19045.6691","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1227 21:01:50.881674    9692 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1227 21:01:50.891679    9692 out.go:179] * [enable-default-cni-630300] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	I1227 21:01:50.896575    9692 notify.go:221] Checking for updates...
	I1227 21:01:50.899030    9692 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1227 21:01:50.905093    9692 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 21:01:50.910576    9692 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1227 21:01:50.917096    9692 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 21:01:50.926199    9692 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 21:01:49.580262    4492 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 21:01:49.580262    4492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 21:01:49.583264    4492 cli_runner.go:164] Run: docker container inspect flannel-630300 --format={{.State.Status}}
	I1227 21:01:49.583264    4492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-630300
	I1227 21:01:49.637783    4492 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 21:01:49.637783    4492 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 21:01:49.638779    4492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61627 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\flannel-630300\id_rsa Username:docker}
	I1227 21:01:49.641773    4492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-630300
	I1227 21:01:49.690774    4492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61627 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\flannel-630300\id_rsa Username:docker}
	I1227 21:01:50.070330    4492 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 21:01:50.071879    4492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 21:01:50.169281    4492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 21:01:50.272214    4492 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 21:01:50.934018    4492 start.go:987] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I1227 21:01:50.978025    4492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" flannel-630300
	I1227 21:01:51.013864    4492 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1227 21:01:50.931145    9692 config.go:182] Loaded profile config "auto-630300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 21:01:50.932023    9692 config.go:182] Loaded profile config "flannel-630300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 21:01:50.932023    9692 config.go:182] Loaded profile config "force-systemd-env-821200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 21:01:50.932023    9692 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 21:01:51.067873    9692 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1227 21:01:51.070871    9692 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 21:01:51.326401    9692 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-27 21:01:51.307250362 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1227 21:01:51.336386    9692 out.go:179] * Using the docker driver based on user configuration
	I1227 21:01:51.338385    9692 start.go:309] selected driver: docker
	I1227 21:01:51.338385    9692 start.go:928] validating driver "docker" against <nil>
	I1227 21:01:51.338385    9692 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 21:01:51.345384    9692 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 21:01:51.609613    9692 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-27 21:01:51.589139103 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1227 21:01:51.609613    9692 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	E1227 21:01:51.610814    9692 start_flags.go:488] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1227 21:01:51.610853    9692 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 21:01:51.613109    9692 out.go:179] * Using Docker Desktop driver with root privileges
	I1227 21:01:51.615216    9692 cni.go:84] Creating CNI manager for "bridge"
	I1227 21:01:51.615264    9692 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1227 21:01:51.615495    9692 start.go:353] cluster config:
	{Name:enable-default-cni-630300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:enable-default-cni-630300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 21:01:51.618365    9692 out.go:179] * Starting "enable-default-cni-630300" primary control-plane node in "enable-default-cni-630300" cluster
	I1227 21:01:51.622670    9692 cache.go:134] Beginning downloading kic base image for docker with docker
	I1227 21:01:51.627161    9692 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 21:01:51.631155    9692 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 21:01:51.631155    9692 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 21:01:51.632247    9692 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1227 21:01:51.632247    9692 cache.go:65] Caching tarball of preloaded images
	I1227 21:01:51.632247    9692 preload.go:251] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1227 21:01:51.632247    9692 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1227 21:01:51.632884    9692 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\config.json ...
	I1227 21:01:51.632918    9692 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\config.json: {Name:mk370c664fbade4d799624b1630da0327c3032db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 21:01:51.711980    9692 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 21:01:51.712058    9692 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 21:01:51.712058    9692 cache.go:243] Successfully downloaded all kic artifacts
	I1227 21:01:51.712159    9692 start.go:360] acquireMachinesLock for enable-default-cni-630300: {Name:mk8a730b61a5f41583f3a132c744f7b69e511f1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 21:01:51.712186    9692 start.go:364] duration metric: took 0s to acquireMachinesLock for "enable-default-cni-630300"
	I1227 21:01:51.712186    9692 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-630300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:enable-default-cni-630300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1227 21:01:51.712186    9692 start.go:125] createHost starting for "" (driver="docker")
	I1227 21:01:51.016866    4492 addons.go:530] duration metric: took 1.5110187s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 21:01:51.038876    4492 node_ready.go:35] waiting up to 15m0s for node "flannel-630300" to be "Ready" ...
	I1227 21:01:51.445062    4492 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-630300" context rescaled to 1 replicas
	W1227 21:01:50.327212    8560 pod_ready.go:104] pod "coredns-7d764666f9-f5bhp" is not "Ready", error: <nil>
	W1227 21:01:52.328689    8560 pod_ready.go:104] pod "coredns-7d764666f9-f5bhp" is not "Ready", error: <nil>
	W1227 21:01:54.826366    8560 pod_ready.go:104] pod "coredns-7d764666f9-f5bhp" is not "Ready", error: <nil>
	I1227 21:01:51.714952    9692 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 21:01:51.715578    9692 start.go:159] libmachine.API.Create for "enable-default-cni-630300" (driver="docker")
	I1227 21:01:51.715664    9692 client.go:173] LocalClient.Create starting
	I1227 21:01:51.715767    9692 main.go:144] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1227 21:01:51.716342    9692 main.go:144] libmachine: Decoding PEM data...
	I1227 21:01:51.716342    9692 main.go:144] libmachine: Parsing certificate...
	I1227 21:01:51.716342    9692 main.go:144] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1227 21:01:51.716342    9692 main.go:144] libmachine: Decoding PEM data...
	I1227 21:01:51.716342    9692 main.go:144] libmachine: Parsing certificate...
	I1227 21:01:51.722657    9692 cli_runner.go:164] Run: docker network inspect enable-default-cni-630300 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 21:01:51.775308    9692 cli_runner.go:211] docker network inspect enable-default-cni-630300 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 21:01:51.779310    9692 network_create.go:284] running [docker network inspect enable-default-cni-630300] to gather additional debugging logs...
	I1227 21:01:51.780308    9692 cli_runner.go:164] Run: docker network inspect enable-default-cni-630300
	W1227 21:01:51.831923    9692 cli_runner.go:211] docker network inspect enable-default-cni-630300 returned with exit code 1
	I1227 21:01:51.831923    9692 network_create.go:287] error running [docker network inspect enable-default-cni-630300]: docker network inspect enable-default-cni-630300: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network enable-default-cni-630300 not found
	I1227 21:01:51.831923    9692 network_create.go:289] output of [docker network inspect enable-default-cni-630300]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network enable-default-cni-630300 not found
	
	** /stderr **
	I1227 21:01:51.834921    9692 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 21:01:51.915924    9692 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1227 21:01:51.930925    9692 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1227 21:01:51.946853    9692 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1227 21:01:51.962450    9692 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1227 21:01:51.978067    9692 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1227 21:01:52.009568    9692 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1227 21:01:52.040325    9692 network.go:209] skipping subnet 192.168.103.0/24 that is reserved: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1227 21:01:52.055569    9692 network.go:209] skipping subnet 192.168.112.0/24 that is reserved: &{IP:192.168.112.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.112.0/24 Gateway:192.168.112.1 ClientMin:192.168.112.2 ClientMax:192.168.112.254 Broadcast:192.168.112.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1227 21:01:52.069332    9692 network.go:206] using free private subnet 192.168.121.0/24: &{IP:192.168.121.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.121.0/24 Gateway:192.168.121.1 ClientMin:192.168.121.2 ClientMax:192.168.121.254 Broadcast:192.168.121.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00191c630}
	I1227 21:01:52.069332    9692 network_create.go:124] attempt to create docker network enable-default-cni-630300 192.168.121.0/24 with gateway 192.168.121.1 and MTU of 1500 ...
	I1227 21:01:52.072686    9692 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.121.0/24 --gateway=192.168.121.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-630300 enable-default-cni-630300
	I1227 21:01:52.217142    9692 network_create.go:108] docker network enable-default-cni-630300 192.168.121.0/24 created
	I1227 21:01:52.217226    9692 kic.go:121] calculated static IP "192.168.121.2" for the "enable-default-cni-630300" container
	I1227 21:01:52.227068    9692 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 21:01:52.286681    9692 cli_runner.go:164] Run: docker volume create enable-default-cni-630300 --label name.minikube.sigs.k8s.io=enable-default-cni-630300 --label created_by.minikube.sigs.k8s.io=true
	I1227 21:01:52.341686    9692 oci.go:103] Successfully created a docker volume enable-default-cni-630300
	I1227 21:01:52.344683    9692 cli_runner.go:164] Run: docker run --rm --name enable-default-cni-630300-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-630300 --entrypoint /usr/bin/test -v enable-default-cni-630300:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 21:01:53.684735    9692 cli_runner.go:217] Completed: docker run --rm --name enable-default-cni-630300-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-630300 --entrypoint /usr/bin/test -v enable-default-cni-630300:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib: (1.3400353s)
	I1227 21:01:53.685323    9692 oci.go:107] Successfully prepared a docker volume enable-default-cni-630300
	I1227 21:01:53.685379    9692 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 21:01:53.685379    9692 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 21:01:53.692369    9692 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-630300:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	W1227 21:01:53.045054    4492 node_ready.go:57] node "flannel-630300" has "Ready":"False" status (will retry)
	W1227 21:01:55.545368    4492 node_ready.go:57] node "flannel-630300" has "Ready":"False" status (will retry)
	W1227 21:01:56.843516    8560 pod_ready.go:104] pod "coredns-7d764666f9-f5bhp" is not "Ready", error: <nil>
	W1227 21:01:59.329723    8560 pod_ready.go:104] pod "coredns-7d764666f9-f5bhp" is not "Ready", error: <nil>
	W1227 21:01:58.044044    4492 node_ready.go:57] node "flannel-630300" has "Ready":"False" status (will retry)
	W1227 21:02:00.045158    4492 node_ready.go:57] node "flannel-630300" has "Ready":"False" status (will retry)
	W1227 21:02:02.294299    4492 node_ready.go:57] node "flannel-630300" has "Ready":"False" status (will retry)
	W1227 21:02:01.827239    8560 pod_ready.go:104] pod "coredns-7d764666f9-f5bhp" is not "Ready", error: <nil>
	W1227 21:02:04.014723    8560 pod_ready.go:104] pod "coredns-7d764666f9-f5bhp" is not "Ready", error: <nil>
	W1227 21:02:04.545219    4492 node_ready.go:57] node "flannel-630300" has "Ready":"False" status (will retry)
	W1227 21:02:06.545634    4492 node_ready.go:57] node "flannel-630300" has "Ready":"False" status (will retry)
	W1227 21:02:06.327604    8560 pod_ready.go:104] pod "coredns-7d764666f9-f5bhp" is not "Ready", error: <nil>
	I1227 21:02:07.329209    8560 pod_ready.go:94] pod "coredns-7d764666f9-f5bhp" is "Ready"
	I1227 21:02:07.329209    8560 pod_ready.go:86] duration metric: took 26.0123576s for pod "coredns-7d764666f9-f5bhp" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 21:02:07.335791    8560 pod_ready.go:83] waiting for pod "etcd-auto-630300" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 21:02:07.347401    8560 pod_ready.go:94] pod "etcd-auto-630300" is "Ready"
	I1227 21:02:07.347401    8560 pod_ready.go:86] duration metric: took 10.9799ms for pod "etcd-auto-630300" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 21:02:07.353774    8560 pod_ready.go:83] waiting for pod "kube-apiserver-auto-630300" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 21:02:07.362984    8560 pod_ready.go:94] pod "kube-apiserver-auto-630300" is "Ready"
	I1227 21:02:07.362984    8560 pod_ready.go:86] duration metric: took 8.3564ms for pod "kube-apiserver-auto-630300" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 21:02:07.366989    8560 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-630300" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 21:02:07.524630    8560 pod_ready.go:94] pod "kube-controller-manager-auto-630300" is "Ready"
	I1227 21:02:07.524704    8560 pod_ready.go:86] duration metric: took 157.7131ms for pod "kube-controller-manager-auto-630300" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 21:02:07.723173    8560 pod_ready.go:83] waiting for pod "kube-proxy-fv54j" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 21:02:08.122805    8560 pod_ready.go:94] pod "kube-proxy-fv54j" is "Ready"
	I1227 21:02:08.122845    8560 pod_ready.go:86] duration metric: took 399.5845ms for pod "kube-proxy-fv54j" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 21:02:08.322550    8560 pod_ready.go:83] waiting for pod "kube-scheduler-auto-630300" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 21:02:08.721877    8560 pod_ready.go:94] pod "kube-scheduler-auto-630300" is "Ready"
	I1227 21:02:08.721877    8560 pod_ready.go:86] duration metric: took 399.322ms for pod "kube-scheduler-auto-630300" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 21:02:08.721877    8560 pod_ready.go:40] duration metric: took 33.4641191s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 21:02:08.818811    8560 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 21:02:08.821740    8560 out.go:179] * Done! kubectl is now configured to use "auto-630300" cluster and "default" namespace by default
	I1227 21:02:08.587538    9692 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-630300:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (14.8949753s)
	I1227 21:02:08.587538    9692 kic.go:203] duration metric: took 14.9019651s to extract preloaded images to volume ...
	I1227 21:02:08.592097    9692 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 21:02:08.827626    9692 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-27 21:02:08.807227454 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1227 21:02:08.831190    9692 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 21:02:09.074725    9692 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-630300 --name enable-default-cni-630300 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-630300 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-630300 --network enable-default-cni-630300 --ip 192.168.121.2 --volume enable-default-cni-630300:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 21:02:09.748349    9692 cli_runner.go:164] Run: docker container inspect enable-default-cni-630300 --format={{.State.Running}}
	I1227 21:02:09.825628    9692 cli_runner.go:164] Run: docker container inspect enable-default-cni-630300 --format={{.State.Status}}
	I1227 21:02:09.896133    9692 cli_runner.go:164] Run: docker exec enable-default-cni-630300 stat /var/lib/dpkg/alternatives/iptables
	I1227 21:02:10.014796    9692 oci.go:144] the created container "enable-default-cni-630300" has a running status.
	I1227 21:02:10.014796    9692 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\enable-default-cni-630300\id_rsa...
	I1227 21:02:10.186858    9692 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\enable-default-cni-630300\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 21:02:10.277538    9692 cli_runner.go:164] Run: docker container inspect enable-default-cni-630300 --format={{.State.Status}}
	I1227 21:02:10.338520    9692 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 21:02:10.338520    9692 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-630300 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 21:02:10.483845    9692 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\enable-default-cni-630300\id_rsa...
	W1227 21:02:08.545717    4492 node_ready.go:57] node "flannel-630300" has "Ready":"False" status (will retry)
	W1227 21:02:10.545835    4492 node_ready.go:57] node "flannel-630300" has "Ready":"False" status (will retry)
	I1227 21:02:11.545214    4492 node_ready.go:49] node "flannel-630300" is "Ready"
	I1227 21:02:11.545749    4492 node_ready.go:38] duration metric: took 20.5066066s for node "flannel-630300" to be "Ready" ...
	I1227 21:02:11.545811    4492 api_server.go:52] waiting for apiserver process to appear ...
	I1227 21:02:11.551317    4492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 21:02:11.572525    4492 api_server.go:72] duration metric: took 22.0664099s to wait for apiserver process to appear ...
	I1227 21:02:11.572525    4492 api_server.go:88] waiting for apiserver healthz status ...
	I1227 21:02:11.573523    4492 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:61631/healthz ...
	I1227 21:02:11.584533    4492 api_server.go:325] https://127.0.0.1:61631/healthz returned 200:
	ok
	I1227 21:02:11.587524    4492 api_server.go:141] control plane version: v1.35.0
	I1227 21:02:11.587524    4492 api_server.go:131] duration metric: took 14.999ms to wait for apiserver health ...
	I1227 21:02:11.587524    4492 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 21:02:11.593523    4492 system_pods.go:59] 7 kube-system pods found
	I1227 21:02:11.593523    4492 system_pods.go:61] "coredns-7d764666f9-xjv8b" [817e3124-dd78-4cbc-b3ff-b8e4e7ce903e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 21:02:11.593523    4492 system_pods.go:61] "etcd-flannel-630300" [7e89967d-f50d-4e23-ae14-3614f808f56a] Running
	I1227 21:02:11.593523    4492 system_pods.go:61] "kube-apiserver-flannel-630300" [83c51090-6405-4b61-aca8-ddcfb093117b] Running
	I1227 21:02:11.593523    4492 system_pods.go:61] "kube-controller-manager-flannel-630300" [236d7801-8a9a-4480-a999-85da5371a056] Running
	I1227 21:02:11.593523    4492 system_pods.go:61] "kube-proxy-jkrnl" [09ea1188-1d8b-4856-8359-3f74341b5ebb] Running
	I1227 21:02:11.593523    4492 system_pods.go:61] "kube-scheduler-flannel-630300" [8f1fec59-addc-4d78-8a9d-0c98e2dd493d] Running
	I1227 21:02:11.593523    4492 system_pods.go:61] "storage-provisioner" [4ceea418-9c3b-4e14-bf4f-f912f90afcc4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 21:02:11.594518    4492 system_pods.go:74] duration metric: took 6.9942ms to wait for pod list to return data ...
	I1227 21:02:11.594518    4492 default_sa.go:34] waiting for default service account to be created ...
	I1227 21:02:11.599520    4492 default_sa.go:45] found service account: "default"
	I1227 21:02:11.599520    4492 default_sa.go:55] duration metric: took 5.0015ms for default service account to be created ...
	I1227 21:02:11.599520    4492 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 21:02:11.604524    4492 system_pods.go:86] 7 kube-system pods found
	I1227 21:02:11.604524    4492 system_pods.go:89] "coredns-7d764666f9-xjv8b" [817e3124-dd78-4cbc-b3ff-b8e4e7ce903e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 21:02:11.604524    4492 system_pods.go:89] "etcd-flannel-630300" [7e89967d-f50d-4e23-ae14-3614f808f56a] Running
	I1227 21:02:11.604524    4492 system_pods.go:89] "kube-apiserver-flannel-630300" [83c51090-6405-4b61-aca8-ddcfb093117b] Running
	I1227 21:02:11.604524    4492 system_pods.go:89] "kube-controller-manager-flannel-630300" [236d7801-8a9a-4480-a999-85da5371a056] Running
	I1227 21:02:11.604524    4492 system_pods.go:89] "kube-proxy-jkrnl" [09ea1188-1d8b-4856-8359-3f74341b5ebb] Running
	I1227 21:02:11.604524    4492 system_pods.go:89] "kube-scheduler-flannel-630300" [8f1fec59-addc-4d78-8a9d-0c98e2dd493d] Running
	I1227 21:02:11.604524    4492 system_pods.go:89] "storage-provisioner" [4ceea418-9c3b-4e14-bf4f-f912f90afcc4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 21:02:11.604524    4492 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1227 21:02:11.822573    4492 system_pods.go:86] 7 kube-system pods found
	I1227 21:02:11.822573    4492 system_pods.go:89] "coredns-7d764666f9-xjv8b" [817e3124-dd78-4cbc-b3ff-b8e4e7ce903e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 21:02:11.822573    4492 system_pods.go:89] "etcd-flannel-630300" [7e89967d-f50d-4e23-ae14-3614f808f56a] Running
	I1227 21:02:11.822573    4492 system_pods.go:89] "kube-apiserver-flannel-630300" [83c51090-6405-4b61-aca8-ddcfb093117b] Running
	I1227 21:02:11.822573    4492 system_pods.go:89] "kube-controller-manager-flannel-630300" [236d7801-8a9a-4480-a999-85da5371a056] Running
	I1227 21:02:11.822573    4492 system_pods.go:89] "kube-proxy-jkrnl" [09ea1188-1d8b-4856-8359-3f74341b5ebb] Running
	I1227 21:02:11.822573    4492 system_pods.go:89] "kube-scheduler-flannel-630300" [8f1fec59-addc-4d78-8a9d-0c98e2dd493d] Running
	I1227 21:02:11.822573    4492 system_pods.go:89] "storage-provisioner" [4ceea418-9c3b-4e14-bf4f-f912f90afcc4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 21:02:12.193698    4492 system_pods.go:86] 7 kube-system pods found
	I1227 21:02:12.193698    4492 system_pods.go:89] "coredns-7d764666f9-xjv8b" [817e3124-dd78-4cbc-b3ff-b8e4e7ce903e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 21:02:12.194696    4492 system_pods.go:89] "etcd-flannel-630300" [7e89967d-f50d-4e23-ae14-3614f808f56a] Running
	I1227 21:02:12.194696    4492 system_pods.go:89] "kube-apiserver-flannel-630300" [83c51090-6405-4b61-aca8-ddcfb093117b] Running
	I1227 21:02:12.194696    4492 system_pods.go:89] "kube-controller-manager-flannel-630300" [236d7801-8a9a-4480-a999-85da5371a056] Running
	I1227 21:02:12.194696    4492 system_pods.go:89] "kube-proxy-jkrnl" [09ea1188-1d8b-4856-8359-3f74341b5ebb] Running
	I1227 21:02:12.194696    4492 system_pods.go:89] "kube-scheduler-flannel-630300" [8f1fec59-addc-4d78-8a9d-0c98e2dd493d] Running
	I1227 21:02:12.194696    4492 system_pods.go:89] "storage-provisioner" [4ceea418-9c3b-4e14-bf4f-f912f90afcc4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 21:02:12.663894    4492 system_pods.go:86] 7 kube-system pods found
	I1227 21:02:12.663894    4492 system_pods.go:89] "coredns-7d764666f9-xjv8b" [817e3124-dd78-4cbc-b3ff-b8e4e7ce903e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 21:02:12.663894    4492 system_pods.go:89] "etcd-flannel-630300" [7e89967d-f50d-4e23-ae14-3614f808f56a] Running
	I1227 21:02:12.663894    4492 system_pods.go:89] "kube-apiserver-flannel-630300" [83c51090-6405-4b61-aca8-ddcfb093117b] Running
	I1227 21:02:12.663894    4492 system_pods.go:89] "kube-controller-manager-flannel-630300" [236d7801-8a9a-4480-a999-85da5371a056] Running
	I1227 21:02:12.663894    4492 system_pods.go:89] "kube-proxy-jkrnl" [09ea1188-1d8b-4856-8359-3f74341b5ebb] Running
	I1227 21:02:12.663894    4492 system_pods.go:89] "kube-scheduler-flannel-630300" [8f1fec59-addc-4d78-8a9d-0c98e2dd493d] Running
	I1227 21:02:12.663894    4492 system_pods.go:89] "storage-provisioner" [4ceea418-9c3b-4e14-bf4f-f912f90afcc4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 21:02:13.206139    4492 system_pods.go:86] 7 kube-system pods found
	I1227 21:02:13.206139    4492 system_pods.go:89] "coredns-7d764666f9-xjv8b" [817e3124-dd78-4cbc-b3ff-b8e4e7ce903e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 21:02:13.206139    4492 system_pods.go:89] "etcd-flannel-630300" [7e89967d-f50d-4e23-ae14-3614f808f56a] Running
	I1227 21:02:13.206139    4492 system_pods.go:89] "kube-apiserver-flannel-630300" [83c51090-6405-4b61-aca8-ddcfb093117b] Running
	I1227 21:02:13.206139    4492 system_pods.go:89] "kube-controller-manager-flannel-630300" [236d7801-8a9a-4480-a999-85da5371a056] Running
	I1227 21:02:13.206139    4492 system_pods.go:89] "kube-proxy-jkrnl" [09ea1188-1d8b-4856-8359-3f74341b5ebb] Running
	I1227 21:02:13.206139    4492 system_pods.go:89] "kube-scheduler-flannel-630300" [8f1fec59-addc-4d78-8a9d-0c98e2dd493d] Running
	I1227 21:02:13.206139    4492 system_pods.go:89] "storage-provisioner" [4ceea418-9c3b-4e14-bf4f-f912f90afcc4] Running
	I1227 21:02:13.866699    4492 system_pods.go:86] 7 kube-system pods found
	I1227 21:02:13.866699    4492 system_pods.go:89] "coredns-7d764666f9-xjv8b" [817e3124-dd78-4cbc-b3ff-b8e4e7ce903e] Running
	I1227 21:02:13.866699    4492 system_pods.go:89] "etcd-flannel-630300" [7e89967d-f50d-4e23-ae14-3614f808f56a] Running
	I1227 21:02:13.866699    4492 system_pods.go:89] "kube-apiserver-flannel-630300" [83c51090-6405-4b61-aca8-ddcfb093117b] Running
	I1227 21:02:13.866699    4492 system_pods.go:89] "kube-controller-manager-flannel-630300" [236d7801-8a9a-4480-a999-85da5371a056] Running
	I1227 21:02:13.866699    4492 system_pods.go:89] "kube-proxy-jkrnl" [09ea1188-1d8b-4856-8359-3f74341b5ebb] Running
	I1227 21:02:13.866699    4492 system_pods.go:89] "kube-scheduler-flannel-630300" [8f1fec59-addc-4d78-8a9d-0c98e2dd493d] Running
	I1227 21:02:13.866699    4492 system_pods.go:89] "storage-provisioner" [4ceea418-9c3b-4e14-bf4f-f912f90afcc4] Running
	I1227 21:02:13.866699    4492 system_pods.go:126] duration metric: took 2.2671493s to wait for k8s-apps to be running ...
	I1227 21:02:13.866699    4492 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 21:02:13.870608    4492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 21:02:13.889611    4492 system_svc.go:56] duration metric: took 22.9119ms WaitForService to wait for kubelet
	I1227 21:02:13.889611    4492 kubeadm.go:587] duration metric: took 24.3834658s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 21:02:13.889611    4492 node_conditions.go:102] verifying NodePressure condition ...
	I1227 21:02:13.895608    4492 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1227 21:02:13.895608    4492 node_conditions.go:123] node cpu capacity is 16
	I1227 21:02:13.895608    4492 node_conditions.go:105] duration metric: took 5.9971ms to run NodePressure ...
	I1227 21:02:13.895608    4492 start.go:242] waiting for startup goroutines ...
	I1227 21:02:13.895608    4492 start.go:247] waiting for cluster config update ...
	I1227 21:02:13.895608    4492 start.go:256] writing updated cluster config ...
	I1227 21:02:13.902607    4492 ssh_runner.go:195] Run: rm -f paused
	I1227 21:02:13.909610    4492 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 21:02:13.916612    4492 pod_ready.go:83] waiting for pod "coredns-7d764666f9-xjv8b" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 21:02:13.925615    4492 pod_ready.go:94] pod "coredns-7d764666f9-xjv8b" is "Ready"
	I1227 21:02:13.925615    4492 pod_ready.go:86] duration metric: took 9.0023ms for pod "coredns-7d764666f9-xjv8b" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 21:02:13.928665    4492 pod_ready.go:83] waiting for pod "etcd-flannel-630300" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 21:02:13.939217    4492 pod_ready.go:94] pod "etcd-flannel-630300" is "Ready"
	I1227 21:02:13.939217    4492 pod_ready.go:86] duration metric: took 10.5528ms for pod "etcd-flannel-630300" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 21:02:13.944230    4492 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-630300" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 21:02:13.952211    4492 pod_ready.go:94] pod "kube-apiserver-flannel-630300" is "Ready"
	I1227 21:02:13.952211    4492 pod_ready.go:86] duration metric: took 7.9806ms for pod "kube-apiserver-flannel-630300" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 21:02:13.956216    4492 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-630300" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 21:02:14.317246    4492 pod_ready.go:94] pod "kube-controller-manager-flannel-630300" is "Ready"
	I1227 21:02:14.317246    4492 pod_ready.go:86] duration metric: took 361.0256ms for pod "kube-controller-manager-flannel-630300" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 21:02:14.517714    4492 pod_ready.go:83] waiting for pod "kube-proxy-jkrnl" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 21:02:14.918494    4492 pod_ready.go:94] pod "kube-proxy-jkrnl" is "Ready"
	I1227 21:02:14.918494    4492 pod_ready.go:86] duration metric: took 400.7749ms for pod "kube-proxy-jkrnl" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 21:02:15.119554    4492 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-630300" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 21:02:15.518011    4492 pod_ready.go:94] pod "kube-scheduler-flannel-630300" is "Ready"
	I1227 21:02:15.518011    4492 pod_ready.go:86] duration metric: took 398.4526ms for pod "kube-scheduler-flannel-630300" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 21:02:15.518011    4492 pod_ready.go:40] duration metric: took 1.6083803s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 21:02:15.614212    4492 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 21:02:15.616590    4492 out.go:179] * Done! kubectl is now configured to use "flannel-630300" cluster and "default" namespace by default
	I1227 21:02:13.198140    9692 cli_runner.go:164] Run: docker container inspect enable-default-cni-630300 --format={{.State.Status}}
	I1227 21:02:13.279873    9692 machine.go:94] provisionDockerMachine start ...
	I1227 21:02:13.285863    9692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-630300
	I1227 21:02:13.351857    9692 main.go:144] libmachine: Using SSH client type: native
	I1227 21:02:13.368865    9692 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil>  [] 0s} 127.0.0.1 61743 <nil> <nil>}
	I1227 21:02:13.368865    9692 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 21:02:13.543118    9692 main.go:144] libmachine: SSH cmd err, output: <nil>: enable-default-cni-630300
	
	I1227 21:02:13.543118    9692 ubuntu.go:182] provisioning hostname "enable-default-cni-630300"
	I1227 21:02:13.547104    9692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-630300
	I1227 21:02:13.600108    9692 main.go:144] libmachine: Using SSH client type: native
	I1227 21:02:13.601099    9692 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil>  [] 0s} 127.0.0.1 61743 <nil> <nil>}
	I1227 21:02:13.601099    9692 main.go:144] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-630300 && echo "enable-default-cni-630300" | sudo tee /etc/hostname
	I1227 21:02:13.780509    9692 main.go:144] libmachine: SSH cmd err, output: <nil>: enable-default-cni-630300
	
	I1227 21:02:13.784640    9692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-630300
	I1227 21:02:13.839975    9692 main.go:144] libmachine: Using SSH client type: native
	I1227 21:02:13.840102    9692 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil>  [] 0s} 127.0.0.1 61743 <nil> <nil>}
	I1227 21:02:13.840102    9692 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-630300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-630300/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-630300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 21:02:14.021254    9692 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 21:02:14.021254    9692 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1227 21:02:14.021254    9692 ubuntu.go:190] setting up certificates
	I1227 21:02:14.021254    9692 provision.go:84] configureAuth start
	I1227 21:02:14.024262    9692 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-630300
	I1227 21:02:14.080979    9692 provision.go:143] copyHostCerts
	I1227 21:02:14.080979    9692 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1227 21:02:14.081974    9692 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1227 21:02:14.081974    9692 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1227 21:02:14.082972    9692 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1227 21:02:14.082972    9692 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1227 21:02:14.082972    9692 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1227 21:02:14.083970    9692 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1227 21:02:14.083970    9692 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1227 21:02:14.083970    9692 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1227 21:02:14.083970    9692 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.enable-default-cni-630300 san=[127.0.0.1 192.168.121.2 enable-default-cni-630300 localhost minikube]
	I1227 21:02:14.240231    9692 provision.go:177] copyRemoteCerts
	I1227 21:02:14.244227    9692 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 21:02:14.248231    9692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-630300
	I1227 21:02:14.310229    9692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61743 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\enable-default-cni-630300\id_rsa Username:docker}
	I1227 21:02:14.442594    9692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 21:02:14.474386    9692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 21:02:14.503417    9692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I1227 21:02:14.537035    9692 provision.go:87] duration metric: took 515.7746ms to configureAuth
	I1227 21:02:14.537035    9692 ubuntu.go:206] setting minikube options for container-runtime
	I1227 21:02:14.537636    9692 config.go:182] Loaded profile config "enable-default-cni-630300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 21:02:14.541776    9692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-630300
	I1227 21:02:14.599202    9692 main.go:144] libmachine: Using SSH client type: native
	I1227 21:02:14.600204    9692 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil>  [] 0s} 127.0.0.1 61743 <nil> <nil>}
	I1227 21:02:14.600204    9692 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1227 21:02:14.768369    9692 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1227 21:02:14.768369    9692 ubuntu.go:71] root file system type: overlay
	I1227 21:02:14.769369    9692 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1227 21:02:14.774383    9692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-630300
	I1227 21:02:14.829366    9692 main.go:144] libmachine: Using SSH client type: native
	I1227 21:02:14.830371    9692 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil>  [] 0s} 127.0.0.1 61743 <nil> <nil>}
	I1227 21:02:14.830371    9692 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1227 21:02:15.010317    9692 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1227 21:02:15.015311    9692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-630300
	I1227 21:02:15.079520    9692 main.go:144] libmachine: Using SSH client type: native
	I1227 21:02:15.079520    9692 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil>  [] 0s} 127.0.0.1 61743 <nil> <nil>}
	I1227 21:02:15.079520    9692 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1227 21:02:17.971110    9692 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-27 21:02:15.005468200 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1227 21:02:17.971110    9692 machine.go:97] duration metric: took 4.6911757s to provisionDockerMachine
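The unit-update command traced above (`diff -u old new || { mv …; daemon-reload; restart; }`) relies on `diff` exiting non-zero only when the files differ, so the install-and-restart branch runs exactly when the generated unit has changed. A minimal standalone sketch of that compare-then-swap pattern, using illustrative local file names rather than the real `/lib/systemd/system` paths:

```shell
# Compare-then-swap config update, as in the log's docker.service step:
# only install the candidate file (and, in the real flow, restart the
# service) when its contents differ from what is already installed.
update_unit() {
    current="$1"    # installed unit file
    candidate="$2"  # freshly generated unit file
    if diff -u "$current" "$candidate"; then
        rm -f "$candidate"           # identical: keep what we have
    else
        mv "$candidate" "$current"   # changed: install the new file
        # sudo systemctl daemon-reload && sudo systemctl restart docker
    fi
}

# Demo with throwaway files in the current directory:
printf 'ExecStart=/usr/bin/dockerd\n' > docker.service
printf 'ExecStart=/usr/bin/dockerd -H fd://\n' > docker.service.new
update_unit docker.service docker.service.new
cat docker.service
```

Because the swap only happens on a real difference, re-running provisioning against an already up-to-date machine leaves the service untouched instead of restarting it needlessly.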
	I1227 21:02:17.971110    9692 client.go:176] duration metric: took 26.2551044s to LocalClient.Create
	I1227 21:02:17.971647    9692 start.go:167] duration metric: took 26.2557275s to libmachine.API.Create "enable-default-cni-630300"
	I1227 21:02:17.971725    9692 start.go:293] postStartSetup for "enable-default-cni-630300" (driver="docker")
	I1227 21:02:17.971757    9692 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 21:02:17.976233    9692 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 21:02:17.979472    9692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-630300
	I1227 21:02:18.034875    9692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61743 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\enable-default-cni-630300\id_rsa Username:docker}
	I1227 21:02:18.177748    9692 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 21:02:18.184939    9692 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 21:02:18.184939    9692 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 21:02:18.184939    9692 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1227 21:02:18.184939    9692 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1227 21:02:18.185917    9692 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\136562.pem -> 136562.pem in /etc/ssl/certs
	I1227 21:02:18.190064    9692 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 21:02:18.202325    9692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\136562.pem --> /etc/ssl/certs/136562.pem (1708 bytes)
	I1227 21:02:18.234933    9692 start.go:296] duration metric: took 263.1728ms for postStartSetup
	I1227 21:02:18.242296    9692 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-630300
	I1227 21:02:18.295420    9692 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\config.json ...
	I1227 21:02:18.302084    9692 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 21:02:18.305568    9692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-630300
	I1227 21:02:18.357981    9692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61743 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\enable-default-cni-630300\id_rsa Username:docker}
	I1227 21:02:18.492017    9692 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 21:02:18.501934    9692 start.go:128] duration metric: took 26.7893992s to createHost
	I1227 21:02:18.501934    9692 start.go:83] releasing machines lock for "enable-default-cni-630300", held for 26.7893992s
	I1227 21:02:18.506250    9692 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-630300
	I1227 21:02:18.561142    9692 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1227 21:02:18.564833    9692 ssh_runner.go:195] Run: cat /version.json
	I1227 21:02:18.567527    9692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-630300
	I1227 21:02:18.568791    9692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-630300
	I1227 21:02:18.624207    9692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61743 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\enable-default-cni-630300\id_rsa Username:docker}
	I1227 21:02:18.628129    9692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61743 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\enable-default-cni-630300\id_rsa Username:docker}
	W1227 21:02:18.735581    9692 start.go:879] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1227 21:02:18.762049    9692 ssh_runner.go:195] Run: systemctl --version
	I1227 21:02:18.779889    9692 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 21:02:18.794252    9692 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 21:02:18.797254    9692 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W1227 21:02:18.840297    9692 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1227 21:02:18.840341    9692 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1227 21:02:18.857567    9692 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1227 21:02:18.857567    9692 start.go:496] detecting cgroup driver to use...
	I1227 21:02:18.857657    9692 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 21:02:18.857752    9692 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 21:02:18.884674    9692 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 21:02:18.903085    9692 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 21:02:18.918564    9692 containerd.go:147] configuring containerd to use "cgroupfs" as cgroup driver...
	I1227 21:02:18.923414    9692 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1227 21:02:18.943111    9692 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 21:02:18.961839    9692 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 21:02:18.986903    9692 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 21:02:19.007740    9692 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 21:02:19.027991    9692 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 21:02:19.049071    9692 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 21:02:19.070873    9692 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 21:02:19.090659    9692 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 21:02:19.108298    9692 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 21:02:19.128432    9692 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 21:02:19.272073    9692 ssh_runner.go:195] Run: sudo systemctl restart containerd
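The sequence of `sed` invocations above rewrites containerd's TOML config in place to force the `cgroupfs` driver, then reloads and restarts containerd. The core edit can be sketched standalone; `./config.toml` here is a stand-in for the real `/etc/containerd/config.toml`, and the snippet below only shows the `SystemdCgroup` flip, not the full set of rewrites from the log:

```shell
# Mirror of the log's cgroup-driver rewrite: flip SystemdCgroup to false
# wherever it appears, preserving the line's original indentation via the
# captured leading-whitespace group \1.
printf '[plugins."io.containerd.runc.v2.options"]\n  SystemdCgroup = true\n' > config.toml
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' config.toml
cat config.toml
```

After edits like this, the real flow runs `systemctl daemon-reload` and `systemctl restart containerd` (as in the log) so the daemon picks up the new driver setting.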
	I1227 21:02:19.434606    9692 start.go:496] detecting cgroup driver to use...
	I1227 21:02:19.434606    9692 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 21:02:19.439009    9692 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1227 21:02:19.467184    9692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 21:02:19.489192    9692 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 21:02:19.577600    9692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 21:02:19.599360    9692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 21:02:19.618553    9692 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 21:02:19.646496    9692 ssh_runner.go:195] Run: which cri-dockerd
	I1227 21:02:19.657940    9692 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1227 21:02:19.670731    9692 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1227 21:02:19.697737    9692 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1227 21:02:19.850732    9692 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1227 21:02:20.010437    9692 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1227 21:02:20.010437    9692 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1227 21:02:20.037572    9692 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1227 21:02:20.061540    9692 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 21:02:20.202104    9692 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1227 21:02:21.019479    9692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 21:02:21.044446    9692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1227 21:02:21.068808    9692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 21:02:21.094713    9692 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1227 21:02:21.253072    9692 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1227 21:02:21.392490    9692 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 21:02:21.535819    9692 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1227 21:02:21.561303    9692 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1227 21:02:21.584323    9692 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 21:02:21.744716    9692 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1227 21:02:21.865174    9692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 21:02:21.883167    9692 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1227 21:02:21.887166    9692 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1227 21:02:21.894169    9692 start.go:574] Will wait 60s for crictl version
	I1227 21:02:21.899166    9692 ssh_runner.go:195] Run: which crictl
	I1227 21:02:21.909165    9692 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 21:02:21.951174    9692 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1227 21:02:21.954170    9692 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 21:02:21.999166    9692 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 21:02:22.038170    9692 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I1227 21:02:22.041447    9692 cli_runner.go:164] Run: docker exec -t enable-default-cni-630300 dig +short host.docker.internal
	I1227 21:02:22.169934    9692 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1227 21:02:22.172934    9692 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1227 21:02:22.181164    9692 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 21:02:22.203707    9692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" enable-default-cni-630300
	I1227 21:02:22.263107    9692 kubeadm.go:884] updating cluster {Name:enable-default-cni-630300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:enable-default-cni-630300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 21:02:22.263107    9692 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 21:02:22.266129    9692 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 21:02:22.297112    9692 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1227 21:02:22.297112    9692 docker.go:624] Images already preloaded, skipping extraction
	I1227 21:02:22.300112    9692 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 21:02:22.333106    9692 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1227 21:02:22.333106    9692 cache_images.go:86] Images are preloaded, skipping loading
	I1227 21:02:22.333106    9692 kubeadm.go:935] updating node { 192.168.121.2 8443 v1.35.0 docker true true} ...
	I1227 21:02:22.333106    9692 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-630300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.121.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:enable-default-cni-630300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1227 21:02:22.337110    9692 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1227 21:02:22.430938    9692 cni.go:84] Creating CNI manager for "bridge"
	I1227 21:02:22.430938    9692 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 21:02:22.430938    9692 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.121.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-630300 NodeName:enable-default-cni-630300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.121.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.121.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 21:02:22.431730    9692 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.121.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "enable-default-cni-630300"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.121.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.121.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 21:02:22.437651    9692 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 21:02:22.450839    9692 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 21:02:22.453840    9692 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 21:02:22.472136    9692 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I1227 21:02:22.492136    9692 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 21:02:22.512981    9692 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2249 bytes)
	I1227 21:02:22.545962    9692 ssh_runner.go:195] Run: grep 192.168.121.2	control-plane.minikube.internal$ /etc/hosts
	I1227 21:02:22.553969    9692 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.121.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 21:02:22.575997    9692 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 21:02:22.727413    9692 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 21:02:22.749505    9692 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300 for IP: 192.168.121.2
	I1227 21:02:22.749505    9692 certs.go:195] generating shared ca certs ...
	I1227 21:02:22.749505    9692 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 21:02:22.750511    9692 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1227 21:02:22.750511    9692 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1227 21:02:22.750511    9692 certs.go:257] generating profile certs ...
	I1227 21:02:22.751349    9692 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\client.key
	I1227 21:02:22.751511    9692 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\client.crt with IP's: []
	I1227 21:02:22.943512    9692 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\client.crt ...
	I1227 21:02:22.943512    9692 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\client.crt: {Name:mk1676a146c0c9cd6ffb4720907e79cc6fc12316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 21:02:22.944397    9692 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\client.key ...
	I1227 21:02:22.944397    9692 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\client.key: {Name:mka21dc1d4e97b715996f94a4f0b1f87366b3fda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 21:02:22.944870    9692 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\apiserver.key.d85eb67e
	I1227 21:02:22.945598    9692 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\apiserver.crt.d85eb67e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.121.2]
	I1227 21:02:23.080736    9692 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\apiserver.crt.d85eb67e ...
	I1227 21:02:23.080736    9692 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\apiserver.crt.d85eb67e: {Name:mk25d7dadafe23336896b58486ee1f73e67beb0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 21:02:23.081814    9692 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\apiserver.key.d85eb67e ...
	I1227 21:02:23.081814    9692 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\apiserver.key.d85eb67e: {Name:mk54b81438d2afcfaedd5da6ccd586137675d0ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 21:02:23.083054    9692 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\apiserver.crt.d85eb67e -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\apiserver.crt
	I1227 21:02:23.102022    9692 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\apiserver.key.d85eb67e -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\apiserver.key
	I1227 21:02:23.103759    9692 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\proxy-client.key
	I1227 21:02:23.103759    9692 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\proxy-client.crt with IP's: []
	I1227 21:02:23.131791    9692 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\proxy-client.crt ...
	I1227 21:02:23.131791    9692 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\proxy-client.crt: {Name:mke6266963e5f4558464f85f6910729ec85f1752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 21:02:23.132366    9692 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\proxy-client.key ...
	I1227 21:02:23.132366    9692 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\proxy-client.key: {Name:mk3fa64cff1f4ea536cb5f1ecf6f0af4b622cebd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 21:02:23.147535    9692 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13656.pem (1338 bytes)
	W1227 21:02:23.147535    9692 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13656_empty.pem, impossibly tiny 0 bytes
	I1227 21:02:23.147535    9692 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1227 21:02:23.148111    9692 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1227 21:02:23.148372    9692 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1227 21:02:23.148434    9692 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1227 21:02:23.148434    9692 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\136562.pem (1708 bytes)
	I1227 21:02:23.150083    9692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 21:02:23.181535    9692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 21:02:23.214694    9692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 21:02:23.248850    9692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 21:02:23.279802    9692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 21:02:23.311959    9692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 21:02:23.338964    9692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 21:02:23.365968    9692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-630300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 21:02:23.396730    9692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 21:02:23.426737    9692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13656.pem --> /usr/share/ca-certificates/13656.pem (1338 bytes)
	I1227 21:02:23.458440    9692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\136562.pem --> /usr/share/ca-certificates/136562.pem (1708 bytes)
	I1227 21:02:23.489514    9692 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 21:02:23.514460    9692 ssh_runner.go:195] Run: openssl version
	I1227 21:02:23.532345    9692 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 21:02:23.549341    9692 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 21:02:23.569334    9692 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 21:02:23.576334    9692 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1227 21:02:23.581336    9692 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 21:02:23.637872    9692 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 21:02:23.656223    9692 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 21:02:23.680038    9692 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13656.pem
	I1227 21:02:23.700007    9692 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13656.pem /etc/ssl/certs/13656.pem
	I1227 21:02:23.716256    9692 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13656.pem
	I1227 21:02:23.726056    9692 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:04 /usr/share/ca-certificates/13656.pem
	I1227 21:02:23.730914    9692 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13656.pem
	I1227 21:02:23.780676    9692 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 21:02:23.797680    9692 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13656.pem /etc/ssl/certs/51391683.0
	I1227 21:02:23.817606    9692 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/136562.pem
	I1227 21:02:23.837743    9692 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/136562.pem /etc/ssl/certs/136562.pem
	I1227 21:02:23.853635    9692 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136562.pem
	I1227 21:02:23.860679    9692 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:04 /usr/share/ca-certificates/136562.pem
	I1227 21:02:23.865228    9692 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136562.pem
	I1227 21:02:23.923714    9692 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 21:02:23.943302    9692 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/136562.pem /etc/ssl/certs/3ec20f2e.0
	I1227 21:02:23.963369    9692 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 21:02:23.970102    9692 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 21:02:23.970102    9692 kubeadm.go:401] StartCluster: {Name:enable-default-cni-630300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:enable-default-cni-630300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 21:02:23.974401    9692 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1227 21:02:24.009970    9692 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 21:02:24.027960    9692 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 21:02:24.040970    9692 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 21:02:24.044974    9692 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 21:02:24.060976    9692 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 21:02:24.060976    9692 kubeadm.go:158] found existing configuration files:
	
	I1227 21:02:24.064931    9692 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 21:02:24.077934    9692 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 21:02:24.080937    9692 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 21:02:24.098933    9692 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 21:02:24.111939    9692 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 21:02:24.114932    9692 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 21:02:24.129931    9692 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 21:02:24.141936    9692 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 21:02:24.145934    9692 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 21:02:24.160932    9692 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 21:02:24.172938    9692 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 21:02:24.176938    9692 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 21:02:24.193946    9692 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 21:02:24.369809    9692 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1227 21:02:24.482648    9692 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 21:02:24.624641    9692 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 21:02:36.358769    9692 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 21:02:36.359008    9692 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 21:02:36.359202    9692 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 21:02:36.359202    9692 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1227 21:02:36.359202    9692 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1227 21:02:36.359202    9692 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1227 21:02:36.359875    9692 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1227 21:02:36.360096    9692 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1227 21:02:36.360304    9692 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1227 21:02:36.360494    9692 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1227 21:02:36.360563    9692 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1227 21:02:36.360707    9692 kubeadm.go:319] CONFIG_INET: enabled
	I1227 21:02:36.360890    9692 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1227 21:02:36.361126    9692 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1227 21:02:36.361387    9692 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1227 21:02:36.361587    9692 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1227 21:02:36.361975    9692 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1227 21:02:36.362152    9692 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1227 21:02:36.362437    9692 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1227 21:02:36.362567    9692 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1227 21:02:36.362769    9692 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1227 21:02:36.362957    9692 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1227 21:02:36.363265    9692 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1227 21:02:36.363374    9692 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1227 21:02:36.363621    9692 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1227 21:02:36.363904    9692 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1227 21:02:36.363994    9692 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1227 21:02:36.364179    9692 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1227 21:02:36.364475    9692 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1227 21:02:36.364568    9692 kubeadm.go:319] OS: Linux
	I1227 21:02:36.364655    9692 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 21:02:36.364655    9692 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 21:02:36.364655    9692 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 21:02:36.364655    9692 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 21:02:36.364655    9692 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 21:02:36.365190    9692 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 21:02:36.365474    9692 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 21:02:36.365474    9692 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 21:02:36.365474    9692 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 21:02:36.365474    9692 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 21:02:36.365474    9692 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 21:02:36.366008    9692 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 21:02:36.366148    9692 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 21:02:36.368169    9692 out.go:252]   - Generating certificates and keys ...
	I1227 21:02:36.368169    9692 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 21:02:36.368169    9692 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 21:02:36.368773    9692 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 21:02:36.368773    9692 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 21:02:36.368773    9692 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 21:02:36.368773    9692 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 21:02:36.368773    9692 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 21:02:36.368773    9692 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-630300 localhost] and IPs [192.168.121.2 127.0.0.1 ::1]
	I1227 21:02:36.369774    9692 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 21:02:36.369774    9692 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-630300 localhost] and IPs [192.168.121.2 127.0.0.1 ::1]
	I1227 21:02:36.369774    9692 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 21:02:36.370451    9692 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 21:02:36.370495    9692 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 21:02:36.370495    9692 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 21:02:36.370495    9692 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 21:02:36.370495    9692 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 21:02:36.370495    9692 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 21:02:36.370495    9692 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 21:02:36.370495    9692 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 21:02:36.371498    9692 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 21:02:36.371498    9692 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 21:02:36.373529    9692 out.go:252]   - Booting up control plane ...
	I1227 21:02:36.373529    9692 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 21:02:36.373529    9692 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 21:02:36.373529    9692 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 21:02:36.374774    9692 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 21:02:36.374774    9692 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 21:02:36.375706    9692 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 21:02:36.375706    9692 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 21:02:36.375706    9692 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 21:02:36.375706    9692 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 21:02:36.375706    9692 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 21:02:36.376701    9692 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 541.621181ms
	I1227 21:02:36.376701    9692 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 21:02:36.376701    9692 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.121.2:8443/livez
	I1227 21:02:36.376701    9692 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 21:02:36.376701    9692 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 21:02:36.376701    9692 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.505737432s
	I1227 21:02:36.377852    9692 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.212491446s
	I1227 21:02:36.377902    9692 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.504308189s
	I1227 21:02:36.377902    9692 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 21:02:36.377902    9692 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 21:02:36.378895    9692 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 21:02:36.378895    9692 kubeadm.go:319] [mark-control-plane] Marking the node enable-default-cni-630300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 21:02:36.378895    9692 kubeadm.go:319] [bootstrap-token] Using token: fo1nxy.4zky8xlq38jwrtcq
	I1227 21:02:36.381929    9692 out.go:252]   - Configuring RBAC rules ...
	I1227 21:02:36.382564    9692 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 21:02:36.382711    9692 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 21:02:36.382955    9692 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 21:02:36.383572    9692 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 21:02:36.383722    9692 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 21:02:36.383828    9692 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 21:02:36.383828    9692 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 21:02:36.384166    9692 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 21:02:36.384166    9692 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 21:02:36.384319    9692 kubeadm.go:319] 
	I1227 21:02:36.384464    9692 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 21:02:36.384561    9692 kubeadm.go:319] 
	I1227 21:02:36.384705    9692 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 21:02:36.384705    9692 kubeadm.go:319] 
	I1227 21:02:36.384705    9692 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 21:02:36.384705    9692 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 21:02:36.384705    9692 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 21:02:36.384705    9692 kubeadm.go:319] 
	I1227 21:02:36.385232    9692 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 21:02:36.385283    9692 kubeadm.go:319] 
	I1227 21:02:36.385380    9692 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 21:02:36.385380    9692 kubeadm.go:319] 
	I1227 21:02:36.385493    9692 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 21:02:36.385528    9692 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 21:02:36.385702    9692 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 21:02:36.385702    9692 kubeadm.go:319] 
	I1227 21:02:36.385702    9692 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 21:02:36.385702    9692 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 21:02:36.385702    9692 kubeadm.go:319] 
	I1227 21:02:36.386425    9692 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token fo1nxy.4zky8xlq38jwrtcq \
	I1227 21:02:36.386425    9692 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2cb1a52fa604c447e145a3f28c6fc9176baea4b39df6959c9cf0292a2c1c58b2 \
	I1227 21:02:36.386425    9692 kubeadm.go:319] 	--control-plane 
	I1227 21:02:36.386425    9692 kubeadm.go:319] 
	I1227 21:02:36.386425    9692 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 21:02:36.386425    9692 kubeadm.go:319] 
	I1227 21:02:36.387057    9692 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token fo1nxy.4zky8xlq38jwrtcq \
	I1227 21:02:36.387474    9692 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2cb1a52fa604c447e145a3f28c6fc9176baea4b39df6959c9cf0292a2c1c58b2 
	I1227 21:02:36.387474    9692 cni.go:84] Creating CNI manager for "bridge"
	I1227 21:02:36.391137    9692 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1227 21:02:36.399942    9692 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1227 21:02:36.457151    9692 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1227 21:02:36.479199    9692 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 21:02:36.485210    9692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-630300 minikube.k8s.io/updated_at=2025_12_27T21_02_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562 minikube.k8s.io/name=enable-default-cni-630300 minikube.k8s.io/primary=true
	I1227 21:02:36.485210    9692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 21:02:36.495201    9692 ops.go:34] apiserver oom_adj: -16
	I1227 21:02:36.711287    9692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 21:02:37.210615    9692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 21:02:37.711033    9692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 21:02:38.211656    9692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 21:02:38.711842    9692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 21:02:39.210524    9692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 21:02:39.710907    9692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 21:02:40.211734    9692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 21:02:40.711291    9692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 21:02:41.210439    9692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 21:02:41.321194    9692 kubeadm.go:1114] duration metric: took 4.8419314s to wait for elevateKubeSystemPrivileges
	I1227 21:02:41.321194    9692 kubeadm.go:403] duration metric: took 17.3508664s to StartCluster
	I1227 21:02:41.321194    9692 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 21:02:41.321194    9692 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1227 21:02:41.323208    9692 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 21:02:41.323208    9692 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 21:02:41.324201    9692 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1227 21:02:41.324201    9692 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 21:02:41.324201    9692 addons.go:70] Setting storage-provisioner=true in profile "enable-default-cni-630300"
	I1227 21:02:41.324201    9692 addons.go:239] Setting addon storage-provisioner=true in "enable-default-cni-630300"
	I1227 21:02:41.324201    9692 host.go:66] Checking if "enable-default-cni-630300" exists ...
	I1227 21:02:41.324201    9692 config.go:182] Loaded profile config "enable-default-cni-630300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 21:02:41.324201    9692 addons.go:70] Setting default-storageclass=true in profile "enable-default-cni-630300"
	I1227 21:02:41.324201    9692 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-630300"
	I1227 21:02:41.332187    9692 cli_runner.go:164] Run: docker container inspect enable-default-cni-630300 --format={{.State.Status}}
	I1227 21:02:41.332187    9692 cli_runner.go:164] Run: docker container inspect enable-default-cni-630300 --format={{.State.Status}}
	I1227 21:02:41.333201    9692 out.go:179] * Verifying Kubernetes components...
	I1227 21:02:41.341202    9692 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 21:02:41.392188    9692 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 21:02:41.393196    9692 addons.go:239] Setting addon default-storageclass=true in "enable-default-cni-630300"
	I1227 21:02:41.393196    9692 host.go:66] Checking if "enable-default-cni-630300" exists ...
	I1227 21:02:41.394189    9692 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 21:02:41.394189    9692 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 21:02:41.398189    9692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-630300
	I1227 21:02:41.400196    9692 cli_runner.go:164] Run: docker container inspect enable-default-cni-630300 --format={{.State.Status}}
	I1227 21:02:41.449199    9692 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 21:02:41.449199    9692 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 21:02:41.450192    9692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61743 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\enable-default-cni-630300\id_rsa Username:docker}
	I1227 21:02:41.452199    9692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-630300
	I1227 21:02:41.517267    9692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61743 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\enable-default-cni-630300\id_rsa Username:docker}
	I1227 21:02:41.573178    9692 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 21:02:41.969905    9692 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 21:02:41.977884    9692 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 21:02:42.177295    9692 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 21:02:42.660441    9692 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.0872487s)
	I1227 21:02:42.660441    9692 start.go:987] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I1227 21:02:43.173440    9692 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-630300" context rescaled to 1 replicas
	I1227 21:02:43.357781    9692 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.3868542s)
	I1227 21:02:43.357781    9692 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.3798792s)
	I1227 21:02:43.357781    9692 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.1804708s)
	I1227 21:02:43.364775    9692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" enable-default-cni-630300
	I1227 21:02:43.379778    9692 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1227 21:02:43.401769    9692 addons.go:530] duration metric: took 2.077541s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 21:02:43.427769    9692 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-630300" to be "Ready" ...
	I1227 21:02:43.459770    9692 node_ready.go:49] node "enable-default-cni-630300" is "Ready"
	I1227 21:02:43.459883    9692 node_ready.go:38] duration metric: took 32.0614ms for node "enable-default-cni-630300" to be "Ready" ...
	I1227 21:02:43.459907    9692 api_server.go:52] waiting for apiserver process to appear ...
	I1227 21:02:43.464672    9692 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 21:02:43.487666    9692 api_server.go:72] duration metric: took 2.163437s to wait for apiserver process to appear ...
	I1227 21:02:43.487666    9692 api_server.go:88] waiting for apiserver healthz status ...
	I1227 21:02:43.487666    9692 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:61747/healthz ...
	I1227 21:02:43.504224    9692 api_server.go:325] https://127.0.0.1:61747/healthz returned 200:
	ok
	I1227 21:02:43.507228    9692 api_server.go:141] control plane version: v1.35.0
	I1227 21:02:43.507228    9692 api_server.go:131] duration metric: took 19.5616ms to wait for apiserver health ...
	I1227 21:02:43.507228    9692 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 21:02:43.514247    9692 system_pods.go:59] 8 kube-system pods found
	I1227 21:02:43.514247    9692 system_pods.go:61] "coredns-7d764666f9-2dmjj" [4473a288-d441-4c50-acaa-e4b7711d12dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 21:02:43.514247    9692 system_pods.go:61] "coredns-7d764666f9-xqsmq" [990ded7d-5bda-4506-a384-49a4b1d0e0b1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 21:02:43.514247    9692 system_pods.go:61] "etcd-enable-default-cni-630300" [50fd58df-31e7-4b3b-bab2-dc535874f701] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 21:02:43.514247    9692 system_pods.go:61] "kube-apiserver-enable-default-cni-630300" [95e20329-9822-4ea7-9bd6-f600bd2891e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 21:02:43.514247    9692 system_pods.go:61] "kube-controller-manager-enable-default-cni-630300" [048914cf-1670-450f-a80b-5e2b33e8a16d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 21:02:43.514247    9692 system_pods.go:61] "kube-proxy-lvhrw" [fb8e012c-16a9-4c46-bfd4-c4d986c0031f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 21:02:43.514247    9692 system_pods.go:61] "kube-scheduler-enable-default-cni-630300" [f9b92bc8-398f-4441-a49c-e8a9c66ad287] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 21:02:43.514247    9692 system_pods.go:61] "storage-provisioner" [492b83dc-fb15-4a48-9113-06d44ea87697] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 21:02:43.514247    9692 system_pods.go:74] duration metric: took 7.0197ms to wait for pod list to return data ...
	I1227 21:02:43.514247    9692 default_sa.go:34] waiting for default service account to be created ...
	I1227 21:02:43.562219    9692 default_sa.go:45] found service account: "default"
	I1227 21:02:43.562219    9692 default_sa.go:55] duration metric: took 47.971ms for default service account to be created ...
	I1227 21:02:43.562219    9692 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 21:02:43.569220    9692 system_pods.go:86] 8 kube-system pods found
	I1227 21:02:43.569220    9692 system_pods.go:89] "coredns-7d764666f9-2dmjj" [4473a288-d441-4c50-acaa-e4b7711d12dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 21:02:43.569220    9692 system_pods.go:89] "coredns-7d764666f9-xqsmq" [990ded7d-5bda-4506-a384-49a4b1d0e0b1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 21:02:43.569220    9692 system_pods.go:89] "etcd-enable-default-cni-630300" [50fd58df-31e7-4b3b-bab2-dc535874f701] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 21:02:43.569220    9692 system_pods.go:89] "kube-apiserver-enable-default-cni-630300" [95e20329-9822-4ea7-9bd6-f600bd2891e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 21:02:43.569220    9692 system_pods.go:89] "kube-controller-manager-enable-default-cni-630300" [048914cf-1670-450f-a80b-5e2b33e8a16d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 21:02:43.569220    9692 system_pods.go:89] "kube-proxy-lvhrw" [fb8e012c-16a9-4c46-bfd4-c4d986c0031f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 21:02:43.569220    9692 system_pods.go:89] "kube-scheduler-enable-default-cni-630300" [f9b92bc8-398f-4441-a49c-e8a9c66ad287] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 21:02:43.569220    9692 system_pods.go:89] "storage-provisioner" [492b83dc-fb15-4a48-9113-06d44ea87697] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 21:02:43.569220    9692 retry.go:84] will retry after 300ms: missing components: kube-dns, kube-proxy
	I1227 21:02:43.839043    9692 system_pods.go:86] 8 kube-system pods found
	I1227 21:02:43.839043    9692 system_pods.go:89] "coredns-7d764666f9-2dmjj" [4473a288-d441-4c50-acaa-e4b7711d12dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 21:02:43.839043    9692 system_pods.go:89] "coredns-7d764666f9-xqsmq" [990ded7d-5bda-4506-a384-49a4b1d0e0b1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 21:02:43.839043    9692 system_pods.go:89] "etcd-enable-default-cni-630300" [50fd58df-31e7-4b3b-bab2-dc535874f701] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 21:02:43.839043    9692 system_pods.go:89] "kube-apiserver-enable-default-cni-630300" [95e20329-9822-4ea7-9bd6-f600bd2891e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 21:02:43.839043    9692 system_pods.go:89] "kube-controller-manager-enable-default-cni-630300" [048914cf-1670-450f-a80b-5e2b33e8a16d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 21:02:43.839043    9692 system_pods.go:89] "kube-proxy-lvhrw" [fb8e012c-16a9-4c46-bfd4-c4d986c0031f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 21:02:43.839043    9692 system_pods.go:89] "kube-scheduler-enable-default-cni-630300" [f9b92bc8-398f-4441-a49c-e8a9c66ad287] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 21:02:43.839043    9692 system_pods.go:89] "storage-provisioner" [492b83dc-fb15-4a48-9113-06d44ea87697] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 21:02:44.166642    9692 system_pods.go:86] 8 kube-system pods found
	I1227 21:02:44.166642    9692 system_pods.go:89] "coredns-7d764666f9-2dmjj" [4473a288-d441-4c50-acaa-e4b7711d12dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 21:02:44.166642    9692 system_pods.go:89] "coredns-7d764666f9-xqsmq" [990ded7d-5bda-4506-a384-49a4b1d0e0b1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 21:02:44.166642    9692 system_pods.go:89] "etcd-enable-default-cni-630300" [50fd58df-31e7-4b3b-bab2-dc535874f701] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 21:02:44.166642    9692 system_pods.go:89] "kube-apiserver-enable-default-cni-630300" [95e20329-9822-4ea7-9bd6-f600bd2891e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 21:02:44.166642    9692 system_pods.go:89] "kube-controller-manager-enable-default-cni-630300" [048914cf-1670-450f-a80b-5e2b33e8a16d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 21:02:44.166642    9692 system_pods.go:89] "kube-proxy-lvhrw" [fb8e012c-16a9-4c46-bfd4-c4d986c0031f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 21:02:44.166642    9692 system_pods.go:89] "kube-scheduler-enable-default-cni-630300" [f9b92bc8-398f-4441-a49c-e8a9c66ad287] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 21:02:44.166642    9692 system_pods.go:89] "storage-provisioner" [492b83dc-fb15-4a48-9113-06d44ea87697] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 21:02:44.561131    9692 system_pods.go:86] 8 kube-system pods found
	I1227 21:02:44.561131    9692 system_pods.go:89] "coredns-7d764666f9-2dmjj" [4473a288-d441-4c50-acaa-e4b7711d12dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 21:02:44.561131    9692 system_pods.go:89] "coredns-7d764666f9-xqsmq" [990ded7d-5bda-4506-a384-49a4b1d0e0b1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 21:02:44.561131    9692 system_pods.go:89] "etcd-enable-default-cni-630300" [50fd58df-31e7-4b3b-bab2-dc535874f701] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 21:02:44.561131    9692 system_pods.go:89] "kube-apiserver-enable-default-cni-630300" [95e20329-9822-4ea7-9bd6-f600bd2891e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 21:02:44.561131    9692 system_pods.go:89] "kube-controller-manager-enable-default-cni-630300" [048914cf-1670-450f-a80b-5e2b33e8a16d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 21:02:44.561131    9692 system_pods.go:89] "kube-proxy-lvhrw" [fb8e012c-16a9-4c46-bfd4-c4d986c0031f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 21:02:44.561131    9692 system_pods.go:89] "kube-scheduler-enable-default-cni-630300" [f9b92bc8-398f-4441-a49c-e8a9c66ad287] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 21:02:44.561131    9692 system_pods.go:89] "storage-provisioner" [492b83dc-fb15-4a48-9113-06d44ea87697] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 21:02:45.140315    9692 system_pods.go:86] 8 kube-system pods found
	I1227 21:02:45.140315    9692 system_pods.go:89] "coredns-7d764666f9-2dmjj" [4473a288-d441-4c50-acaa-e4b7711d12dc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 21:02:45.140315    9692 system_pods.go:89] "coredns-7d764666f9-xqsmq" [990ded7d-5bda-4506-a384-49a4b1d0e0b1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 21:02:45.140315    9692 system_pods.go:89] "etcd-enable-default-cni-630300" [50fd58df-31e7-4b3b-bab2-dc535874f701] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 21:02:45.140315    9692 system_pods.go:89] "kube-apiserver-enable-default-cni-630300" [95e20329-9822-4ea7-9bd6-f600bd2891e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 21:02:45.140315    9692 system_pods.go:89] "kube-controller-manager-enable-default-cni-630300" [048914cf-1670-450f-a80b-5e2b33e8a16d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 21:02:45.140315    9692 system_pods.go:89] "kube-proxy-lvhrw" [fb8e012c-16a9-4c46-bfd4-c4d986c0031f] Running
	I1227 21:02:45.140315    9692 system_pods.go:89] "kube-scheduler-enable-default-cni-630300" [f9b92bc8-398f-4441-a49c-e8a9c66ad287] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 21:02:45.140315    9692 system_pods.go:89] "storage-provisioner" [492b83dc-fb15-4a48-9113-06d44ea87697] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 21:02:45.140315    9692 system_pods.go:126] duration metric: took 1.5780758s to wait for k8s-apps to be running ...
	I1227 21:02:45.140315    9692 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 21:02:45.144327    9692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 21:02:45.165319    9692 system_svc.go:56] duration metric: took 25.0029ms WaitForService to wait for kubelet
	I1227 21:02:45.165319    9692 kubeadm.go:587] duration metric: took 3.841068s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 21:02:45.165319    9692 node_conditions.go:102] verifying NodePressure condition ...
	I1227 21:02:45.171320    9692 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1227 21:02:45.171320    9692 node_conditions.go:123] node cpu capacity is 16
	I1227 21:02:45.171320    9692 node_conditions.go:105] duration metric: took 6.0015ms to run NodePressure ...
	I1227 21:02:45.171320    9692 start.go:242] waiting for startup goroutines ...
	I1227 21:02:45.171320    9692 start.go:247] waiting for cluster config update ...
	I1227 21:02:45.171320    9692 start.go:256] writing updated cluster config ...
	I1227 21:02:45.175323    9692 ssh_runner.go:195] Run: rm -f paused
	I1227 21:02:45.183330    9692 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 21:02:45.191319    9692 pod_ready.go:83] waiting for pod "coredns-7d764666f9-2dmjj" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 21:02:50.141962    1360 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1227 21:02:50.142070    1360 kubeadm.go:319] 
	I1227 21:02:50.142644    1360 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 21:02:50.147018    1360 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 21:02:50.147069    1360 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 21:02:50.147069    1360 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 21:02:50.147609    1360 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1227 21:02:50.147785    1360 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1227 21:02:50.147785    1360 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1227 21:02:50.147785    1360 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1227 21:02:50.147785    1360 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1227 21:02:50.148314    1360 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1227 21:02:50.148445    1360 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1227 21:02:50.148670    1360 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1227 21:02:50.148670    1360 kubeadm.go:319] CONFIG_INET: enabled
	I1227 21:02:50.148670    1360 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1227 21:02:50.148670    1360 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1227 21:02:50.149199    1360 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1227 21:02:50.149601    1360 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1227 21:02:50.149755    1360 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1227 21:02:50.149958    1360 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1227 21:02:50.150180    1360 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1227 21:02:50.150211    1360 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1227 21:02:50.150211    1360 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1227 21:02:50.150211    1360 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1227 21:02:50.150211    1360 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1227 21:02:50.151671    1360 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1227 21:02:50.151671    1360 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1227 21:02:50.151671    1360 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1227 21:02:50.152197    1360 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1227 21:02:50.152520    1360 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1227 21:02:50.152718    1360 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1227 21:02:50.152904    1360 kubeadm.go:319] OS: Linux
	I1227 21:02:50.152982    1360 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 21:02:50.152982    1360 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 21:02:50.152982    1360 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 21:02:50.152982    1360 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 21:02:50.153643    1360 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 21:02:50.153773    1360 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 21:02:50.153863    1360 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 21:02:50.154068    1360 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 21:02:50.154068    1360 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 21:02:50.154068    1360 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 21:02:50.154700    1360 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 21:02:50.154837    1360 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 21:02:50.154939    1360 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 21:02:50.157147    1360 out.go:252]   - Generating certificates and keys ...
	I1227 21:02:50.157147    1360 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 21:02:50.157147    1360 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 21:02:50.158162    1360 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 21:02:50.158162    1360 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 21:02:50.158162    1360 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 21:02:50.158162    1360 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 21:02:50.158162    1360 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 21:02:50.158162    1360 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 21:02:50.159147    1360 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 21:02:50.159147    1360 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 21:02:50.159147    1360 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 21:02:50.159147    1360 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 21:02:50.159147    1360 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 21:02:50.159147    1360 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 21:02:50.159147    1360 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 21:02:50.160147    1360 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 21:02:50.160147    1360 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 21:02:50.160147    1360 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 21:02:50.160147    1360 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1227 21:02:47.205032    9692 pod_ready.go:104] pod "coredns-7d764666f9-2dmjj" is not "Ready", error: <nil>
	W1227 21:02:49.701093    9692 pod_ready.go:104] pod "coredns-7d764666f9-2dmjj" is not "Ready", error: <nil>
	I1227 21:02:50.165143    1360 out.go:252]   - Booting up control plane ...
	I1227 21:02:50.165143    1360 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 21:02:50.165143    1360 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 21:02:50.166140    1360 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 21:02:50.166140    1360 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 21:02:50.166140    1360 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 21:02:50.166140    1360 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 21:02:50.166140    1360 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 21:02:50.166140    1360 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 21:02:50.167140    1360 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 21:02:50.167140    1360 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 21:02:50.167140    1360 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000193739s
	I1227 21:02:50.167140    1360 kubeadm.go:319] 
	I1227 21:02:50.167140    1360 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 21:02:50.167140    1360 kubeadm.go:319] 	- The kubelet is not running
	I1227 21:02:50.168151    1360 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 21:02:50.168151    1360 kubeadm.go:319] 
	I1227 21:02:50.168151    1360 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 21:02:50.168151    1360 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 21:02:50.168151    1360 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 21:02:50.168151    1360 kubeadm.go:319] 
	I1227 21:02:50.168151    1360 kubeadm.go:403] duration metric: took 8m5.0580182s to StartCluster
	I1227 21:02:50.168151    1360 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 21:02:50.173134    1360 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 21:02:50.242146    1360 cri.go:96] found id: ""
	I1227 21:02:50.242146    1360 logs.go:282] 0 containers: []
	W1227 21:02:50.242146    1360 logs.go:284] No container was found matching "kube-apiserver"
	I1227 21:02:50.242146    1360 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 21:02:50.247135    1360 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 21:02:50.306137    1360 cri.go:96] found id: ""
	I1227 21:02:50.306137    1360 logs.go:282] 0 containers: []
	W1227 21:02:50.306137    1360 logs.go:284] No container was found matching "etcd"
	I1227 21:02:50.306137    1360 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 21:02:50.309804    1360 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 21:02:50.358810    1360 cri.go:96] found id: ""
	I1227 21:02:50.358810    1360 logs.go:282] 0 containers: []
	W1227 21:02:50.358810    1360 logs.go:284] No container was found matching "coredns"
	I1227 21:02:50.358810    1360 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 21:02:50.362801    1360 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 21:02:50.412669    1360 cri.go:96] found id: ""
	I1227 21:02:50.412669    1360 logs.go:282] 0 containers: []
	W1227 21:02:50.412669    1360 logs.go:284] No container was found matching "kube-scheduler"
	I1227 21:02:50.412669    1360 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 21:02:50.417272    1360 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 21:02:50.468017    1360 cri.go:96] found id: ""
	I1227 21:02:50.468017    1360 logs.go:282] 0 containers: []
	W1227 21:02:50.468017    1360 logs.go:284] No container was found matching "kube-proxy"
	I1227 21:02:50.468017    1360 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 21:02:50.472018    1360 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 21:02:50.521254    1360 cri.go:96] found id: ""
	I1227 21:02:50.521254    1360 logs.go:282] 0 containers: []
	W1227 21:02:50.521254    1360 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 21:02:50.521254    1360 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 21:02:50.527192    1360 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 21:02:50.604979    1360 cri.go:96] found id: ""
	I1227 21:02:50.604979    1360 logs.go:282] 0 containers: []
	W1227 21:02:50.604979    1360 logs.go:284] No container was found matching "kindnet"
	I1227 21:02:50.604979    1360 logs.go:123] Gathering logs for dmesg ...
	I1227 21:02:50.604979    1360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 21:02:50.653991    1360 logs.go:123] Gathering logs for describe nodes ...
	I1227 21:02:50.653991    1360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 21:02:50.758843    1360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 21:02:50.751694   10281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 21:02:50.753305   10281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 21:02:50.754413   10281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 21:02:50.755496   10281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 21:02:50.756399   10281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 21:02:50.751694   10281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 21:02:50.753305   10281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 21:02:50.754413   10281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 21:02:50.755496   10281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 21:02:50.756399   10281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 21:02:50.758843    1360 logs.go:123] Gathering logs for Docker ...
	I1227 21:02:50.758843    1360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1227 21:02:50.789836    1360 logs.go:123] Gathering logs for container status ...
	I1227 21:02:50.789836    1360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 21:02:50.842866    1360 logs.go:123] Gathering logs for kubelet ...
	I1227 21:02:50.842866    1360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1227 21:02:50.915706    1360 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000193739s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 21:02:50.915706    1360 out.go:285] * 
	W1227 21:02:50.915706    1360 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000193739s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 21:02:50.915706    1360 out.go:285] * 
	W1227 21:02:50.916413    1360 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 21:02:50.922522    1360 out.go:203] 
	W1227 21:02:50.926991    1360 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000193739s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 21:02:50.926991    1360 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 21:02:50.926991    1360 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 21:02:50.929002    1360 out.go:203] 
	
	
	==> Docker <==
	Dec 27 20:54:41 force-systemd-env-821200 dockerd[1193]: time="2025-12-27T20:54:41.355512495Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 27 20:54:41 force-systemd-env-821200 dockerd[1193]: time="2025-12-27T20:54:41.355645710Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 27 20:54:41 force-systemd-env-821200 dockerd[1193]: time="2025-12-27T20:54:41.355708318Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 27 20:54:41 force-systemd-env-821200 dockerd[1193]: time="2025-12-27T20:54:41.355716318Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 27 20:54:41 force-systemd-env-821200 dockerd[1193]: time="2025-12-27T20:54:41.355723519Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 27 20:54:41 force-systemd-env-821200 dockerd[1193]: time="2025-12-27T20:54:41.355755823Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 27 20:54:41 force-systemd-env-821200 dockerd[1193]: time="2025-12-27T20:54:41.355816030Z" level=info msg="Initializing buildkit"
	Dec 27 20:54:41 force-systemd-env-821200 dockerd[1193]: time="2025-12-27T20:54:41.497078624Z" level=info msg="Completed buildkit initialization"
	Dec 27 20:54:41 force-systemd-env-821200 dockerd[1193]: time="2025-12-27T20:54:41.508191521Z" level=info msg="Daemon has completed initialization"
	Dec 27 20:54:41 force-systemd-env-821200 dockerd[1193]: time="2025-12-27T20:54:41.508474654Z" level=info msg="API listen on /run/docker.sock"
	Dec 27 20:54:41 force-systemd-env-821200 dockerd[1193]: time="2025-12-27T20:54:41.508478955Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 27 20:54:41 force-systemd-env-821200 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 27 20:54:41 force-systemd-env-821200 dockerd[1193]: time="2025-12-27T20:54:41.508487756Z" level=info msg="API listen on [::]:2376"
	Dec 27 20:54:42 force-systemd-env-821200 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 27 20:54:42 force-systemd-env-821200 cri-dockerd[1487]: time="2025-12-27T20:54:42Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 27 20:54:42 force-systemd-env-821200 cri-dockerd[1487]: time="2025-12-27T20:54:42Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 27 20:54:42 force-systemd-env-821200 cri-dockerd[1487]: time="2025-12-27T20:54:42Z" level=info msg="Start docker client with request timeout 0s"
	Dec 27 20:54:42 force-systemd-env-821200 cri-dockerd[1487]: time="2025-12-27T20:54:42Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 27 20:54:42 force-systemd-env-821200 cri-dockerd[1487]: time="2025-12-27T20:54:42Z" level=info msg="Loaded network plugin cni"
	Dec 27 20:54:42 force-systemd-env-821200 cri-dockerd[1487]: time="2025-12-27T20:54:42Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 27 20:54:42 force-systemd-env-821200 cri-dockerd[1487]: time="2025-12-27T20:54:42Z" level=info msg="Setting cgroupDriver systemd"
	Dec 27 20:54:42 force-systemd-env-821200 cri-dockerd[1487]: time="2025-12-27T20:54:42Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 27 20:54:42 force-systemd-env-821200 cri-dockerd[1487]: time="2025-12-27T20:54:42Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 27 20:54:42 force-systemd-env-821200 cri-dockerd[1487]: time="2025-12-27T20:54:42Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 27 20:54:42 force-systemd-env-821200 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 21:02:53.780140   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 21:02:53.782270   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 21:02:53.783683   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 21:02:53.785031   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 21:02:53.787345   10505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +8.891818] CPU: 14 PID: 355236 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f297bc33b20
	[  +0.000006] Code: Unable to access opcode bytes at RIP 0x7f297bc33af6.
	[  +0.000001] RSP: 002b:00007ffc8dd977a0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.891579] CPU: 10 PID: 355378 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f8af9f46b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f8af9f46af6.
	[  +0.000002] RSP: 002b:00007ffdf8505b00 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000002] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000047] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +6.948415] tmpfs: Unknown parameter 'noswap'
	[  +8.584717] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 21:02:53 up  1:18,  0 user,  load average: 6.38, 6.74, 4.95
	Linux force-systemd-env-821200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 27 21:02:50 force-systemd-env-821200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 21:02:51 force-systemd-env-821200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 27 21:02:51 force-systemd-env-821200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 21:02:51 force-systemd-env-821200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 21:02:51 force-systemd-env-821200 kubelet[10311]: E1227 21:02:51.320897   10311 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 21:02:51 force-systemd-env-821200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 21:02:51 force-systemd-env-821200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 21:02:51 force-systemd-env-821200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 27 21:02:51 force-systemd-env-821200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 21:02:51 force-systemd-env-821200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 21:02:52 force-systemd-env-821200 kubelet[10353]: E1227 21:02:52.060281   10353 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 21:02:52 force-systemd-env-821200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 21:02:52 force-systemd-env-821200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 21:02:52 force-systemd-env-821200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 27 21:02:52 force-systemd-env-821200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 21:02:52 force-systemd-env-821200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 21:02:52 force-systemd-env-821200 kubelet[10385]: E1227 21:02:52.801890   10385 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 21:02:52 force-systemd-env-821200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 21:02:52 force-systemd-env-821200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 21:02:53 force-systemd-env-821200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 27 21:02:53 force-systemd-env-821200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 21:02:53 force-systemd-env-821200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 21:02:53 force-systemd-env-821200 kubelet[10453]: E1227 21:02:53.575856   10453 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 21:02:53 force-systemd-env-821200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 21:02:53 force-systemd-env-821200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
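The repeated kubelet restarts in the journal above ("failed to validate kubelet configuration ... cgroup v1") are triggered because the WSL2 host is running the legacy cgroup v1 hierarchy. As a first triage step, the node's cgroup mode can be checked with a one-liner; this is a generic Linux check, not a command taken from this report:

```shell
# Report whether the kernel mounts the unified cgroup v2 hierarchy or the
# legacy v1 one. kubelet v1.35+ refuses to start on cgroup v1 unless its
# configuration sets FailCgroupV1 to false (per the preflight warning above).
fstype=$(stat -fc %T /sys/fs/cgroup/)
echo "cgroup filesystem: $fstype"
# "cgroup2fs" -> cgroup v2 (unified); "tmpfs" -> cgroup v1 (legacy)
```

On the WSL2 kernel shown in the report (5.15.153.1-microsoft-standard-WSL2), this would report the v1 layout unless WSL is configured for the unified hierarchy.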
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p force-systemd-env-821200 -n force-systemd-env-821200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p force-systemd-env-821200 -n force-systemd-env-821200: exit status 6 (613.2281ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1227 21:02:55.402692    2180 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-821200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-env-821200" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-env-821200" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-821200
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-821200: (3.1445845s)
--- FAIL: TestForceSystemdEnv (529.71s)
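The "[kubelet-check]" timeout in both force-systemd failures is kubeadm's health probe giving up: with the kubelet crash-looping, nothing ever listens on 127.0.0.1:10248. The probe itself can be reproduced directly on any node (a generic sketch, not a command from the report); when no kubelet is listening, curl fails with "connection refused" and a non-zero exit status, matching the error in the log:

```shell
# kubeadm's wait-control-plane phase polls the kubelet healthz endpoint.
# When nothing listens on 127.0.0.1:10248 (kubelet crash-looping, as in the
# journal above), the probe fails and curl exits non-zero.
rc=0
curl -sSL --max-time 2 http://127.0.0.1:10248/healthz || rc=$?
echo "probe exit status: $rc"
```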

x
+
TestErrorSpam/setup (42.89s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-241800 -n=1 --memory=3072 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-241800 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-241800 -n=1 --memory=3072 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-241800 --driver=docker: (42.8937469s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube container"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-241800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
- KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
- MINIKUBE_LOCATION=22332
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting "nospam-241800" primary control-plane node in "nospam-241800" cluster
* Pulling base image v0.0.48-1766570851-22316 ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-241800" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube container
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (42.89s)


Test pass (319/349)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.54
4 TestDownloadOnly/v1.28.0/preload-exists 0.04
7 TestDownloadOnly/v1.28.0/kubectl 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.81
9 TestDownloadOnly/v1.28.0/DeleteAll 1.19
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.46
12 TestDownloadOnly/v1.35.0/json-events 5.43
13 TestDownloadOnly/v1.35.0/preload-exists 0
16 TestDownloadOnly/v1.35.0/kubectl 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.21
18 TestDownloadOnly/v1.35.0/DeleteAll 1.09
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.55
20 TestDownloadOnlyKic 1.86
21 TestBinaryMirror 2.25
22 TestOffline 134.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.34
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.34
27 TestAddons/Setup 281.59
29 TestAddons/serial/Volcano 50.9
31 TestAddons/serial/GCPAuth/Namespaces 0.24
32 TestAddons/serial/GCPAuth/FakeCredentials 10.11
36 TestAddons/parallel/RegistryCreds 1.41
38 TestAddons/parallel/InspektorGadget 12.25
39 TestAddons/parallel/MetricsServer 7.69
41 TestAddons/parallel/CSI 48.05
42 TestAddons/parallel/Headlamp 29.34
43 TestAddons/parallel/CloudSpanner 7.3
44 TestAddons/parallel/LocalPath 23.25
45 TestAddons/parallel/NvidiaDevicePlugin 7.23
46 TestAddons/parallel/Yakd 12.88
47 TestAddons/parallel/AmdGpuDevicePlugin 7.13
48 TestAddons/StoppedEnableDisable 12.83
49 TestCertOptions 60.87
50 TestCertExpiration 274.3
51 TestDockerFlags 53.27
59 TestErrorSpam/start 2.47
60 TestErrorSpam/status 2.02
61 TestErrorSpam/pause 2.7
62 TestErrorSpam/unpause 2.51
63 TestErrorSpam/stop 19.13
66 TestFunctional/serial/CopySyncFile 0.03
67 TestFunctional/serial/StartWithProxy 71.93
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 49.55
70 TestFunctional/serial/KubeContext 0.09
71 TestFunctional/serial/KubectlGetPods 0.29
74 TestFunctional/serial/CacheCmd/cache/add_remote 10.09
75 TestFunctional/serial/CacheCmd/cache/add_local 4.21
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.21
77 TestFunctional/serial/CacheCmd/cache/list 0.19
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.58
79 TestFunctional/serial/CacheCmd/cache/cache_reload 4.45
80 TestFunctional/serial/CacheCmd/cache/delete 0.38
81 TestFunctional/serial/MinikubeKubectlCmd 0.37
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.84
83 TestFunctional/serial/ExtraConfig 49.08
84 TestFunctional/serial/ComponentHealth 0.13
85 TestFunctional/serial/LogsCmd 1.79
86 TestFunctional/serial/LogsFileCmd 1.82
87 TestFunctional/serial/InvalidService 5.47
89 TestFunctional/parallel/ConfigCmd 1.09
91 TestFunctional/parallel/DryRun 2
92 TestFunctional/parallel/InternationalLanguage 0.87
93 TestFunctional/parallel/StatusCmd 2.31
98 TestFunctional/parallel/AddonsCmd 0.4
99 TestFunctional/parallel/PersistentVolumeClaim 24.45
101 TestFunctional/parallel/SSHCmd 1.22
102 TestFunctional/parallel/CpCmd 3.42
103 TestFunctional/parallel/MySQL 76.02
104 TestFunctional/parallel/FileSync 0.63
105 TestFunctional/parallel/CertSync 3.52
109 TestFunctional/parallel/NodeLabels 0.13
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
113 TestFunctional/parallel/License 1.25
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.47
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.56
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.45
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.47
118 TestFunctional/parallel/ImageCommands/ImageBuild 9.07
119 TestFunctional/parallel/ImageCommands/Setup 1.63
120 TestFunctional/parallel/DockerEnv/powershell 5.89
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.76
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.4
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.32
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.36
125 TestFunctional/parallel/ServiceCmd/DeployApp 9.31
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.22
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.88
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.44
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.04
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.8
134 TestFunctional/parallel/ServiceCmd/List 0.77
135 TestFunctional/parallel/ImageCommands/ImageRemove 1.03
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.66
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.12
138 TestFunctional/parallel/ServiceCmd/HTTPS 15.02
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.86
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.21
146 TestFunctional/parallel/Version/short 0.18
147 TestFunctional/parallel/Version/components 1.24
148 TestFunctional/parallel/ProfileCmd/profile_not_create 1.14
149 TestFunctional/parallel/ProfileCmd/profile_list 1.19
150 TestFunctional/parallel/ProfileCmd/profile_json_output 1.37
151 TestFunctional/parallel/ServiceCmd/Format 15.01
152 TestFunctional/parallel/ServiceCmd/URL 15.01
153 TestFunctional/delete_echo-server_images 0.14
154 TestFunctional/delete_my-image_image 0.05
155 TestFunctional/delete_minikube_cached_images 0.05
160 TestMultiControlPlane/serial/StartCluster 204.57
161 TestMultiControlPlane/serial/DeployApp 9.05
162 TestMultiControlPlane/serial/PingHostFromPods 2.45
163 TestMultiControlPlane/serial/AddWorkerNode 54.58
164 TestMultiControlPlane/serial/NodeLabels 0.13
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.98
166 TestMultiControlPlane/serial/CopyFile 32.94
167 TestMultiControlPlane/serial/StopSecondaryNode 13.36
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 1.54
169 TestMultiControlPlane/serial/RestartSecondaryNode 47.09
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.95
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 167.99
172 TestMultiControlPlane/serial/DeleteSecondaryNode 14.2
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.49
174 TestMultiControlPlane/serial/StopCluster 37.05
175 TestMultiControlPlane/serial/RestartCluster 76.67
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.44
177 TestMultiControlPlane/serial/AddSecondaryNode 79.16
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.93
181 TestImageBuild/serial/Setup 40.79
182 TestImageBuild/serial/NormalBuild 4.4
183 TestImageBuild/serial/BuildWithBuildArg 1.98
184 TestImageBuild/serial/BuildWithDockerIgnore 1.26
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.22
190 TestJSONOutput/start/Command 72.39
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 1.12
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.9
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 12.01
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.67
215 TestKicCustomNetwork/create_custom_network 48.67
216 TestKicCustomNetwork/use_default_bridge_network 48.41
217 TestKicExistingNetwork 49.35
218 TestKicCustomSubnet 49.35
219 TestKicStaticIP 50.04
220 TestMainNoArgs 0.16
221 TestMinikubeProfile 91.94
224 TestMountStart/serial/StartWithMountFirst 13.56
225 TestMountStart/serial/VerifyMountFirst 0.55
226 TestMountStart/serial/StartWithMountSecond 13.5
227 TestMountStart/serial/VerifyMountSecond 0.53
228 TestMountStart/serial/DeleteFirst 2.44
229 TestMountStart/serial/VerifyMountPostDelete 0.52
230 TestMountStart/serial/Stop 1.85
231 TestMountStart/serial/RestartStopped 10.76
232 TestMountStart/serial/VerifyMountPostStop 0.54
235 TestMultiNode/serial/FreshStart2Nodes 122.5
236 TestMultiNode/serial/DeployApp2Nodes 7.05
237 TestMultiNode/serial/PingHostFrom2Pods 1.73
238 TestMultiNode/serial/AddNode 53.42
239 TestMultiNode/serial/MultiNodeLabels 0.13
240 TestMultiNode/serial/ProfileList 1.37
241 TestMultiNode/serial/CopyFile 18.8
242 TestMultiNode/serial/StopNode 3.69
243 TestMultiNode/serial/StartAfterStop 13.23
244 TestMultiNode/serial/RestartKeepsNodes 78.49
245 TestMultiNode/serial/DeleteNode 8.32
246 TestMultiNode/serial/StopMultiNode 24.07
247 TestMultiNode/serial/RestartMultiNode 58.96
248 TestMultiNode/serial/ValidateNameConflict 46.75
253 TestScheduledStopWindows 112.54
257 TestInsufficientStorage 28.46
258 TestRunningBinaryUpgrade 379.86
260 TestKubernetesUpgrade 144.17
261 TestMissingContainerUpgrade 192.5
263 TestPause/serial/Start 129.31
264 TestPause/serial/SecondStartNoReconfiguration 57.53
265 TestPause/serial/Pause 1.16
266 TestPause/serial/VerifyStatus 0.64
267 TestPause/serial/Unpause 0.99
268 TestPause/serial/PauseAgain 1.41
269 TestPause/serial/DeletePaused 4.4
270 TestStoppedBinaryUpgrade/Setup 0.9
271 TestStoppedBinaryUpgrade/Upgrade 337.44
272 TestPause/serial/VerifyDeletedResources 1.06
280 TestPreload/Start-NoPreload-PullImage 100.84
281 TestPreload/Restart-With-Preload-Check-User-Image 48.43
295 TestNoKubernetes/serial/StartNoK8sWithVersion 0.24
296 TestNoKubernetes/serial/StartWithK8s 44.85
297 TestNoKubernetes/serial/StartWithStopK8s 20.91
298 TestNoKubernetes/serial/Start 13.94
299 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
300 TestNoKubernetes/serial/VerifyK8sNotRunning 0.59
301 TestNoKubernetes/serial/ProfileList 8.28
302 TestNoKubernetes/serial/Stop 1.9
303 TestStoppedBinaryUpgrade/MinikubeLogs 1.34
304 TestNoKubernetes/serial/StartNoArgs 12.92
305 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.52
307 TestStartStop/group/old-k8s-version/serial/FirstStart 96.48
309 TestStartStop/group/no-preload/serial/FirstStart 94.99
310 TestStartStop/group/old-k8s-version/serial/DeployApp 9.65
311 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.58
312 TestStartStop/group/old-k8s-version/serial/Stop 12.18
313 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.5
314 TestStartStop/group/old-k8s-version/serial/SecondStart 30.73
315 TestStartStop/group/no-preload/serial/DeployApp 9.61
316 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.69
317 TestStartStop/group/no-preload/serial/Stop 12.33
318 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 25.01
319 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.5
320 TestStartStop/group/no-preload/serial/SecondStart 53.52
321 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.3
322 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.46
323 TestStartStop/group/old-k8s-version/serial/Pause 5.39
325 TestStartStop/group/embed-certs/serial/FirstStart 98.87
326 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.07
327 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.27
328 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.48
329 TestStartStop/group/no-preload/serial/Pause 5.33
331 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 86.31
333 TestStartStop/group/newest-cni/serial/FirstStart 57.68
334 TestStartStop/group/newest-cni/serial/DeployApp 0
335 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.93
336 TestStartStop/group/newest-cni/serial/Stop 12.23
337 TestStartStop/group/embed-certs/serial/DeployApp 8.59
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.51
339 TestStartStop/group/newest-cni/serial/SecondStart 23.78
340 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.59
341 TestStartStop/group/embed-certs/serial/Stop 12.36
342 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.61
343 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.58
344 TestStartStop/group/embed-certs/serial/SecondStart 30.28
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.5
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.58
349 TestStartStop/group/newest-cni/serial/Pause 5.37
350 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.39
351 TestPreload/PreloadSrc/gcs 6.59
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.55
353 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 65.69
354 TestPreload/PreloadSrc/github 8.85
355 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 16.01
356 TestPreload/PreloadSrc/gcs-cached 2.08
357 TestNetworkPlugins/group/auto/Start 93.78
358 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.07
359 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.54
360 TestStartStop/group/embed-certs/serial/Pause 5.77
361 TestNetworkPlugins/group/flannel/Start 73.02
362 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
363 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.38
364 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.47
365 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.19
366 TestNetworkPlugins/group/enable-default-cni/Start 84.57
367 TestNetworkPlugins/group/auto/KubeletFlags 0.57
368 TestNetworkPlugins/group/auto/NetCatPod 15.53
369 TestNetworkPlugins/group/flannel/ControllerPod 6.01
370 TestNetworkPlugins/group/flannel/KubeletFlags 0.55
371 TestNetworkPlugins/group/flannel/NetCatPod 14.39
372 TestNetworkPlugins/group/auto/DNS 0.39
373 TestNetworkPlugins/group/auto/Localhost 0.28
374 TestNetworkPlugins/group/auto/HairPin 0.27
375 TestNetworkPlugins/group/flannel/DNS 0.25
376 TestNetworkPlugins/group/flannel/Localhost 0.21
377 TestNetworkPlugins/group/flannel/HairPin 0.23
378 TestNetworkPlugins/group/bridge/Start 87.98
379 TestNetworkPlugins/group/kubenet/Start 99.11
380 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.58
381 TestNetworkPlugins/group/enable-default-cni/NetCatPod 24.24
382 TestNetworkPlugins/group/custom-flannel/Start 71.65
383 TestNetworkPlugins/group/enable-default-cni/DNS 0.29
384 TestNetworkPlugins/group/enable-default-cni/Localhost 0.22
385 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
386 TestNetworkPlugins/group/calico/Start 103.33
387 TestNetworkPlugins/group/bridge/KubeletFlags 0.6
388 TestNetworkPlugins/group/bridge/NetCatPod 16.25
389 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.68
390 TestNetworkPlugins/group/custom-flannel/NetCatPod 15.47
391 TestNetworkPlugins/group/kubenet/KubeletFlags 0.62
392 TestNetworkPlugins/group/kubenet/NetCatPod 15.63
393 TestNetworkPlugins/group/bridge/DNS 0.32
394 TestNetworkPlugins/group/bridge/Localhost 0.25
395 TestNetworkPlugins/group/bridge/HairPin 0.21
396 TestNetworkPlugins/group/custom-flannel/DNS 0.31
397 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
398 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
399 TestNetworkPlugins/group/kubenet/DNS 0.24
400 TestNetworkPlugins/group/kubenet/Localhost 0.21
401 TestNetworkPlugins/group/kubenet/HairPin 0.21
402 TestNetworkPlugins/group/false/Start 85.59
403 TestNetworkPlugins/group/kindnet/Start 72.63
404 TestNetworkPlugins/group/calico/ControllerPod 6.01
405 TestNetworkPlugins/group/calico/KubeletFlags 0.55
406 TestNetworkPlugins/group/calico/NetCatPod 14.53
407 TestNetworkPlugins/group/calico/DNS 0.26
408 TestNetworkPlugins/group/calico/Localhost 0.27
409 TestNetworkPlugins/group/calico/HairPin 0.23
410 TestNetworkPlugins/group/false/KubeletFlags 0.54
411 TestNetworkPlugins/group/false/NetCatPod 14.51
412 TestNetworkPlugins/group/kindnet/ControllerPod 6.02
413 TestNetworkPlugins/group/kindnet/KubeletFlags 0.56
414 TestNetworkPlugins/group/kindnet/NetCatPod 18.47
415 TestNetworkPlugins/group/false/DNS 0.24
416 TestNetworkPlugins/group/false/Localhost 0.21
417 TestNetworkPlugins/group/false/HairPin 0.21
418 TestNetworkPlugins/group/kindnet/DNS 0.23
419 TestNetworkPlugins/group/kindnet/Localhost 0.21
420 TestNetworkPlugins/group/kindnet/HairPin 0.2
TestDownloadOnly/v1.28.0/json-events (6.54s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-730500 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-730500 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker: (6.5420368s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.54s)

TestDownloadOnly/v1.28.0/preload-exists (0.04s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1227 19:55:34.430314   13656 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1227 19:55:34.474018   13656 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.04s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
--- PASS: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.81s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-730500
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-730500: exit status 85 (807.5944ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                       ARGS                                                                        │       PROFILE        │       USER        │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-730500 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker │ download-only-730500 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 19:55:28
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 19:55:27.957311    9848 out.go:360] Setting OutFile to fd 676 ...
	I1227 19:55:28.002862    9848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:55:28.002862    9848 out.go:374] Setting ErrFile to fd 680...
	I1227 19:55:28.002862    9848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1227 19:55:28.015725    9848 root.go:314] Error reading config file at C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I1227 19:55:28.023155    9848 out.go:368] Setting JSON to true
	I1227 19:55:28.025436    9848 start.go:133] hostinfo: {"hostname":"minikube4","uptime":714,"bootTime":1766864613,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"22H2","kernelVersion":"10.0.19045.6691 Build 19045.6691","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1227 19:55:28.025436    9848 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1227 19:55:28.038895    9848 out.go:99] [download-only-730500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	W1227 19:55:28.038895    9848 preload.go:372] Failed to list preload files: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I1227 19:55:28.038895    9848 notify.go:221] Checking for updates...
	I1227 19:55:28.040906    9848 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1227 19:55:28.042898    9848 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1227 19:55:28.044897    9848 out.go:171] MINIKUBE_LOCATION=22332
	I1227 19:55:28.046897    9848 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1227 19:55:28.050900    9848 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1227 19:55:28.051895    9848 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 19:55:28.257532    9848 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1227 19:55:28.261248    9848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 19:55:28.942702    9848 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:69 SystemTime:2025-12-27 19:55:28.921269667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1227 19:55:28.946866    9848 out.go:99] Using the docker driver based on user configuration
	I1227 19:55:28.946946    9848 start.go:309] selected driver: docker
	I1227 19:55:28.946986    9848 start.go:928] validating driver "docker" against <nil>
	I1227 19:55:28.952993    9848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 19:55:29.210378    9848 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:69 SystemTime:2025-12-27 19:55:29.190511613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1227 19:55:29.210670    9848 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 19:55:29.211780    9848 start_flags.go:417] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1227 19:55:29.211994    9848 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 19:55:29.229658    9848 out.go:171] Using Docker Desktop driver with root privileges
	I1227 19:55:29.233407    9848 cni.go:84] Creating CNI manager for ""
	I1227 19:55:29.233487    9848 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1227 19:55:29.233487    9848 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1227 19:55:29.233693    9848 start.go:353] cluster config:
	{Name:download-only-730500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-730500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 19:55:29.236196    9848 out.go:99] Starting "download-only-730500" primary control-plane node in "download-only-730500" cluster
	I1227 19:55:29.236274    9848 cache.go:134] Beginning downloading kic base image for docker with docker
	I1227 19:55:29.238633    9848 out.go:99] Pulling base image v0.0.48-1766570851-22316 ...
	I1227 19:55:29.238633    9848 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1227 19:55:29.238633    9848 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 19:55:29.281688    9848 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1227 19:55:29.281768    9848 cache.go:65] Caching tarball of preloaded images
	I1227 19:55:29.282298    9848 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1227 19:55:29.285014    9848 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1227 19:55:29.285103    9848 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1227 19:55:29.285131    9848 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1227 19:55:29.295470    9848 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 19:55:29.295470    9848 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1766570851-22316@sha256_7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a.tar
	I1227 19:55:29.295470    9848 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1766570851-22316@sha256_7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a.tar
	I1227 19:55:29.295470    9848 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory
	I1227 19:55:29.297531    9848 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 19:55:29.354112    9848 preload.go:313] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1227 19:55:29.354304    9848 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-730500 host does not exist
	  To start a cluster, run: "minikube start -p download-only-730500"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.81s)

TestDownloadOnly/v1.28.0/DeleteAll (1.19s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1929964s)
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (1.19s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.46s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-730500
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.46s)

TestDownloadOnly/v1.35.0/json-events (5.43s)

=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-588300 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-588300 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=docker: (5.4324108s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (5.43s)

TestDownloadOnly/v1.35.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I1227 19:55:42.374034   13656 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1227 19:55:42.374034   13656 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0/kubectl
--- PASS: TestDownloadOnly/v1.35.0/kubectl (0.00s)

TestDownloadOnly/v1.35.0/LogsDuration (0.21s)

=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-588300
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-588300: exit status 85 (209.4543ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │       PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-730500 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker │ download-only-730500 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │                     │
	│ delete  │ --all                                                                                                                                             │ minikube             │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ delete  │ -p download-only-730500                                                                                                                           │ download-only-730500 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ start   │ -o=json --download-only -p download-only-588300 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=docker │ download-only-588300 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 19:55:37
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 19:55:37.012255    4348 out.go:360] Setting OutFile to fd 772 ...
	I1227 19:55:37.055526    4348 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:55:37.055526    4348 out.go:374] Setting ErrFile to fd 812...
	I1227 19:55:37.055526    4348 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:55:37.071530    4348 out.go:368] Setting JSON to true
	I1227 19:55:37.073533    4348 start.go:133] hostinfo: {"hostname":"minikube4","uptime":723,"bootTime":1766864613,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"22H2","kernelVersion":"10.0.19045.6691 Build 19045.6691","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1227 19:55:37.073533    4348 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1227 19:55:37.079536    4348 out.go:99] [download-only-588300] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	I1227 19:55:37.079764    4348 notify.go:221] Checking for updates...
	I1227 19:55:37.081890    4348 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1227 19:55:37.084819    4348 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1227 19:55:37.087042    4348 out.go:171] MINIKUBE_LOCATION=22332
	I1227 19:55:37.089828    4348 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1227 19:55:37.094178    4348 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1227 19:55:37.095195    4348 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 19:55:37.209958    4348 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1227 19:55:37.213217    4348 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 19:55:37.520027    4348 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:69 SystemTime:2025-12-27 19:55:37.501415282 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1227 19:55:37.918789    4348 out.go:99] Using the docker driver based on user configuration
	I1227 19:55:37.919117    4348 start.go:309] selected driver: docker
	I1227 19:55:37.919117    4348 start.go:928] validating driver "docker" against <nil>
	I1227 19:55:37.926532    4348 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 19:55:38.152710    4348 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:69 SystemTime:2025-12-27 19:55:38.135058866 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1227 19:55:38.152710    4348 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 19:55:38.153852    4348 start_flags.go:417] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1227 19:55:38.154044    4348 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 19:55:38.158109    4348 out.go:171] Using Docker Desktop driver with root privileges
	I1227 19:55:38.161414    4348 cni.go:84] Creating CNI manager for ""
	I1227 19:55:38.161414    4348 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1227 19:55:38.161414    4348 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1227 19:55:38.161414    4348 start.go:353] cluster config:
	{Name:download-only-588300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:download-only-588300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 19:55:38.164428    4348 out.go:99] Starting "download-only-588300" primary control-plane node in "download-only-588300" cluster
	I1227 19:55:38.164428    4348 cache.go:134] Beginning downloading kic base image for docker with docker
	I1227 19:55:38.169411    4348 out.go:99] Pulling base image v0.0.48-1766570851-22316 ...
	I1227 19:55:38.169411    4348 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 19:55:38.169411    4348 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 19:55:38.204491    4348 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1227 19:55:38.204491    4348 cache.go:65] Caching tarball of preloaded images
	I1227 19:55:38.205048    4348 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 19:55:38.206835    4348 out.go:99] Downloading Kubernetes v1.35.0 preload ...
	I1227 19:55:38.207380    4348 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1227 19:55:38.207404    4348 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1227 19:55:38.224261    4348 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 19:55:38.224549    4348 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1766570851-22316@sha256_7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a.tar
	I1227 19:55:38.224549    4348 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1766570851-22316@sha256_7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a.tar
	I1227 19:55:38.224549    4348 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory
	I1227 19:55:38.224549    4348 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory, skipping pull
	I1227 19:55:38.224549    4348 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in cache, skipping pull
	I1227 19:55:38.224549    4348 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a as a tarball
	I1227 19:55:38.276990    4348 preload.go:313] Got checksum from GCS API "c0024de4eb9cf719bc0d5996878f94c1"
	I1227 19:55:38.277195    4348 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4?checksum=md5:c0024de4eb9cf719bc0d5996878f94c1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1227 19:55:41.267809    4348 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1227 19:55:41.268213    4348 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-588300\config.json ...
	I1227 19:55:41.268603    4348 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-588300\config.json: {Name:mkdd7278a0444c50d3d28ab69246bd6a735a7e2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:55:41.268819    4348 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 19:55:41.269953    4348 download.go:114] Downloading: https://dl.k8s.io/release/v1.35.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v1.35.0/kubectl.exe
	
	
	* The control-plane node download-only-588300 host does not exist
	  To start a cluster, run: "minikube start -p download-only-588300"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.21s)

TestDownloadOnly/v1.35.0/DeleteAll (1.09s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.0926101s)
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (1.09s)

TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.55s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-588300
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.55s)

TestDownloadOnlyKic (1.86s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-990000 --alsologtostderr --driver=docker
aaa_download_only_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-990000 --alsologtostderr --driver=docker: (1.122571s)
helpers_test.go:176: Cleaning up "download-docker-990000" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-990000
--- PASS: TestDownloadOnlyKic (1.86s)

TestBinaryMirror (2.25s)

=== RUN   TestBinaryMirror
I1227 19:55:47.135997   13656 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/windows/amd64/kubectl.exe.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-125900 --alsologtostderr --binary-mirror http://127.0.0.1:56124 --driver=docker
aaa_download_only_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-125900 --alsologtostderr --binary-mirror http://127.0.0.1:56124 --driver=docker: (1.4972092s)
helpers_test.go:176: Cleaning up "binary-mirror-125900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-125900
--- PASS: TestBinaryMirror (2.25s)

TestOffline (134.62s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-637800 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-637800 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker: (2m10.7209257s)
helpers_test.go:176: Cleaning up "offline-docker-637800" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-637800
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-637800: (3.8982083s)
--- PASS: TestOffline (134.62s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.34s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-192900
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-192900: exit status 85 (344.7519ms)
-- stdout --
	* Profile "addons-192900" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-192900"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.34s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.34s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-192900
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-192900: exit status 85 (341.6987ms)
-- stdout --
	* Profile "addons-192900" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-192900"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.34s)

TestAddons/Setup (281.59s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-192900 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-192900 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (4m41.585999s)
--- PASS: TestAddons/Setup (281.59s)

TestAddons/serial/Volcano (50.9s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:870: volcano-scheduler stabilized in 16.6667ms
addons_test.go:878: volcano-admission stabilized in 16.6667ms
addons_test.go:886: volcano-controller stabilized in 16.6667ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-6c7b5cd66b-29hc5" [012b0513-16da-4f36-a5ce-a1b38aa0216b] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.0185472s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-7f4844c49c-xqjsk" [f8604fa0-1f03-45d9-82e2-38266a7c47cc] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.0061312s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-8f57bcd69-rhpv7" [2b8d1493-6c0d-416a-af5f-3cfdd627a99a] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.0078255s
addons_test.go:905: (dbg) Run:  kubectl --context addons-192900 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-192900 create -f testdata\vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-192900 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [0f1af203-1bba-4897-bfe6-3e03f2189f0b] Pending
helpers_test.go:353: "test-job-nginx-0" [0f1af203-1bba-4897-bfe6-3e03f2189f0b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [0f1af203-1bba-4897-bfe6-3e03f2189f0b] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 22.0065343s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-192900 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-192900 addons disable volcano --alsologtostderr -v=1: (12.0651299s)
--- PASS: TestAddons/serial/Volcano (50.90s)

TestAddons/serial/GCPAuth/Namespaces (0.24s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-192900 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-192900 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.24s)

TestAddons/serial/GCPAuth/FakeCredentials (10.11s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-192900 create -f testdata\busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-192900 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [6d986db6-dda0-4d8f-8e70-f5b27733834b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [6d986db6-dda0-4d8f-8e70-f5b27733834b] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.0069029s
addons_test.go:696: (dbg) Run:  kubectl --context addons-192900 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-192900 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-192900 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-192900 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.11s)

TestAddons/parallel/RegistryCreds (1.41s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 57.0057ms
addons_test.go:327: (dbg) Run:  out/minikube-windows-amd64.exe addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-192900
addons_test.go:334: (dbg) Run:  kubectl --context addons-192900 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-192900 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (1.41s)

TestAddons/parallel/InspektorGadget (12.25s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-7zbcz" [b4185d45-91f0-484a-b7ba-3eac4efc08c9] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0058771s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-192900 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-192900 addons disable inspektor-gadget --alsologtostderr -v=1: (6.2460154s)
--- PASS: TestAddons/parallel/InspektorGadget (12.25s)

TestAddons/parallel/MetricsServer (7.69s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 164.996ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-928kv" [70dab4fc-8134-4a33-9da1-3dc5045b7062] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0066378s
addons_test.go:465: (dbg) Run:  kubectl --context addons-192900 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-192900 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-192900 addons disable metrics-server --alsologtostderr -v=1: (1.3748319s)
--- PASS: TestAddons/parallel/MetricsServer (7.69s)

TestAddons/parallel/CSI (48.05s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1227 20:02:05.670529   13656 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1227 20:02:05.730675   13656 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1227 20:02:05.731086   13656 kapi.go:107] duration metric: took 60.5571ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 60.6223ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-192900 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-192900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-192900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-192900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-192900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-192900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-192900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-192900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-192900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-192900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-192900 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-192900 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [86659449-07f2-43ab-b90b-9f3f93ac6d13] Pending
helpers_test.go:353: "task-pv-pod" [86659449-07f2-43ab-b90b-9f3f93ac6d13] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [86659449-07f2-43ab-b90b-9f3f93ac6d13] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.0060267s
addons_test.go:574: (dbg) Run:  kubectl --context addons-192900 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-192900 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-192900 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-192900 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-192900 delete pod task-pv-pod: (1.6053456s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-192900 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-192900 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-192900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-192900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-192900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-192900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-192900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-192900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-192900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-192900 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [8ce0a9de-86e3-4b24-a047-002fbe32f336] Pending
helpers_test.go:353: "task-pv-pod-restore" [8ce0a9de-86e3-4b24-a047-002fbe32f336] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [8ce0a9de-86e3-4b24-a047-002fbe32f336] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005962s
addons_test.go:616: (dbg) Run:  kubectl --context addons-192900 delete pod task-pv-pod-restore
addons_test.go:616: (dbg) Done: kubectl --context addons-192900 delete pod task-pv-pod-restore: (1.346956s)
addons_test.go:620: (dbg) Run:  kubectl --context addons-192900 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-192900 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-192900 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-192900 addons disable volumesnapshots --alsologtostderr -v=1: (1.2214741s)
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-192900 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-192900 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.2323642s)
--- PASS: TestAddons/parallel/CSI (48.05s)
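The repeated `kubectl get pvc ... -o jsonpath={.status.phase}` lines above come from a wait loop in helpers_test.go that re-checks the phase until it reads `Bound` or the timeout expires. A minimal shell sketch of that pattern, with a `check_phase` stub in place of kubectl (hypothetical; the real harness shells out to kubectl and waits up to 6m):

```shell
# Stub for `kubectl get pvc hpvc -o jsonpath={.status.phase}`: reports
# Pending for the first two checks, then Bound, simulating provisioning.
check_phase() {
  if [ "$1" -lt 3 ]; then echo "Pending"; else echo "Bound"; fi
}

# Poll until the phase is Bound or the retry budget runs out; the real
# loop is shaped the same, but sleeps between checks against a clock.
wait_for_bound() {
  for attempt in 1 2 3 4 5 6 7 8 9 10; do
    phase=$(check_phase "$attempt")
    if [ "$phase" = "Bound" ]; then
      echo "pvc Bound after $attempt checks"
      return 0
    fi
  done
  echo "timed out waiting for pvc" >&2
  return 1
}

wait_for_bound
```

Each line of `helpers_test.go:403` output in the log corresponds to one iteration of this kind of loop.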

TestAddons/parallel/Headlamp (29.34s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-192900 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-192900 --alsologtostderr -v=1: (1.7692014s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-6d8d595f-gr94s" [b447d99b-65e3-49e0-b0cb-a8cca8a317b7] Pending
helpers_test.go:353: "headlamp-6d8d595f-gr94s" [b447d99b-65e3-49e0-b0cb-a8cca8a317b7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-6d8d595f-gr94s" [b447d99b-65e3-49e0-b0cb-a8cca8a317b7] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 20.0328649s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-192900 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-192900 addons disable headlamp --alsologtostderr -v=1: (7.5322948s)
--- PASS: TestAddons/parallel/Headlamp (29.34s)

TestAddons/parallel/CloudSpanner (7.3s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-pzxb2" [5112bc9b-9b29-494d-93bb-37881205e7b8] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0055223s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-192900 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-192900 addons disable cloud-spanner --alsologtostderr -v=1: (2.2868295s)
--- PASS: TestAddons/parallel/CloudSpanner (7.30s)

TestAddons/parallel/LocalPath (23.25s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-192900 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-192900 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-192900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-192900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-192900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-192900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-192900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-192900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-192900 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [3657b1de-f42a-41fa-a64a-37cfd883b2c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [3657b1de-f42a-41fa-a64a-37cfd883b2c4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [3657b1de-f42a-41fa-a64a-37cfd883b2c4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 14.0064519s
addons_test.go:969: (dbg) Run:  kubectl --context addons-192900 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-192900 ssh "cat /opt/local-path-provisioner/pvc-1dd1c4cf-525d-4b1c-a2e1-bc649d27c87c_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-192900 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-192900 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-192900 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-192900 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1.0304077s)
--- PASS: TestAddons/parallel/LocalPath (23.25s)

TestAddons/parallel/NvidiaDevicePlugin (7.23s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-4vggv" [f7dbf5da-9fac-4885-8c9e-a3416c84cfd4] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0067973s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-192900 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-192900 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.2200057s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.23s)

TestAddons/parallel/Yakd (12.88s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-865bfb49b9-qp4n6" [d1e31baa-1cec-4fdf-bf80-69d9a285edff] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0061694s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-192900 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-192900 addons disable yakd --alsologtostderr -v=1: (6.8712451s)
--- PASS: TestAddons/parallel/Yakd (12.88s)

TestAddons/parallel/AmdGpuDevicePlugin (7.13s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-xrshx" [918bac8b-a018-44ed-a32a-934fd3e18d1b] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.0055187s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-192900 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-192900 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: (1.1256107s)
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (7.13s)

TestAddons/StoppedEnableDisable (12.83s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-192900
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-192900: (12.0319102s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-192900
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-192900
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-192900
--- PASS: TestAddons/StoppedEnableDisable (12.83s)

TestCertOptions (60.87s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-955700 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-955700 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (55.4463634s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-955700 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
I1227 20:54:37.864679   13656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-955700
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-955700 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-955700" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-955700
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-955700: (4.0498775s)
--- PASS: TestCertOptions (60.87s)

TestCertExpiration (274.3s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-978000 --memory=3072 --cert-expiration=3m --driver=docker
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-978000 --memory=3072 --cert-expiration=3m --driver=docker: (54.3420708s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-978000 --memory=3072 --cert-expiration=8760h --driver=docker
E1227 20:57:51.788523   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-978000 --memory=3072 --cert-expiration=8760h --driver=docker: (35.4491734s)
helpers_test.go:176: Cleaning up "cert-expiration-978000" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-978000
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-978000: (4.5096098s)
--- PASS: TestCertExpiration (274.30s)
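TestCertExpiration first starts the cluster with certs valid for only 3m, lets them lapse, then restarts with `--cert-expiration=8760h` (one year). As a quick sanity check of those Go-style duration flags, here is a small sketch that converts a single-unit duration to seconds (plain arithmetic, not minikube code; `to_seconds` is a name invented here):

```shell
# Convert a single-unit Go-style duration (h, m, or s suffix) to seconds.
to_seconds() {
  value=${1%?}          # strip the trailing unit character
  unit=${1#"$value"}    # what was stripped: h, m, or s
  case "$unit" in
    h) echo $((value * 3600)) ;;
    m) echo $((value * 60)) ;;
    s) echo "$value" ;;
    *) echo "unsupported unit: $unit" >&2; return 1 ;;
  esac
}

to_seconds 3m      # short enough to expire while the test waits
to_seconds 8760h   # 365 days, the renewed validity period
```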

TestDockerFlags (53.27s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-385200 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-385200 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (48.317905s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-385200 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-385200 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-385200" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-385200
E1227 20:55:14.721673   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-385200: (3.7776272s)
--- PASS: TestDockerFlags (53.27s)
TestErrorSpam/start (2.47s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-241800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-241800 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-241800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-241800 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-241800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-241800 start --dry-run
--- PASS: TestErrorSpam/start (2.47s)
TestErrorSpam/status (2.02s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-241800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-241800 status
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-241800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-241800 status
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-241800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-241800 status
--- PASS: TestErrorSpam/status (2.02s)
TestErrorSpam/pause (2.7s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-241800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-241800 pause
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-241800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-241800 pause: (1.2890513s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-241800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-241800 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-241800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-241800 pause
--- PASS: TestErrorSpam/pause (2.70s)
TestErrorSpam/unpause (2.51s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-241800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-241800 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-241800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-241800 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-241800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-241800 unpause
--- PASS: TestErrorSpam/unpause (2.51s)
TestErrorSpam/stop (19.13s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-241800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-241800 stop
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-241800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-241800 stop: (11.8976181s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-241800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-241800 stop
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-241800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-241800 stop: (3.6586053s)
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-241800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-241800 stop
error_spam_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-241800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-241800 stop: (3.5728706s)
--- PASS: TestErrorSpam/stop (19.13s)
TestFunctional/serial/CopySyncFile (0.03s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13656\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)
TestFunctional/serial/StartWithProxy (71.93s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-052200 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker
E1227 20:05:31.647317   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:05:31.652478   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:05:31.662962   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:05:31.683744   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:05:31.724070   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:05:31.804865   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:05:31.965426   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:05:32.285939   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:05:32.926431   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:05:34.207271   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:05:36.767927   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2244: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-052200 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker: (1m11.9205459s)
--- PASS: TestFunctional/serial/StartWithProxy (71.93s)
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (49.55s)
=== RUN   TestFunctional/serial/SoftStart
I1227 20:05:36.857749   13656 config.go:182] Loaded profile config "functional-052200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-052200 --alsologtostderr -v=8
E1227 20:05:41.888998   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:05:52.130005   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:06:12.610853   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-052200 --alsologtostderr -v=8: (49.5435793s)
functional_test.go:678: soft start took 49.5467279s for "functional-052200" cluster.
I1227 20:06:26.402728   13656 config.go:182] Loaded profile config "functional-052200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (49.55s)
TestFunctional/serial/KubeContext (0.09s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.09s)
TestFunctional/serial/KubectlGetPods (0.29s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-052200 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.29s)
TestFunctional/serial/CacheCmd/cache/add_remote (10.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Done: out/minikube-windows-amd64.exe -p functional-052200 cache add registry.k8s.io/pause:3.1: (3.8938316s)
functional_test.go:1069: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Done: out/minikube-windows-amd64.exe -p functional-052200 cache add registry.k8s.io/pause:3.3: (3.0472065s)
functional_test.go:1069: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 cache add registry.k8s.io/pause:latest
functional_test.go:1069: (dbg) Done: out/minikube-windows-amd64.exe -p functional-052200 cache add registry.k8s.io/pause:latest: (3.1469834s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (10.09s)
TestFunctional/serial/CacheCmd/cache/add_local (4.21s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-052200 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local213288238\001
functional_test.go:1097: (dbg) Done: docker build -t minikube-local-cache-test:functional-052200 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local213288238\001: (1.3567847s)
functional_test.go:1109: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 cache add minikube-local-cache-test:functional-052200
functional_test.go:1109: (dbg) Done: out/minikube-windows-amd64.exe -p functional-052200 cache add minikube-local-cache-test:functional-052200: (2.5934273s)
functional_test.go:1114: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 cache delete minikube-local-cache-test:functional-052200
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-052200
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (4.21s)
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.21s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.21s)
TestFunctional/serial/CacheCmd/cache/list (0.19s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.19s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.58s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.58s)
TestFunctional/serial/CacheCmd/cache/cache_reload (4.45s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-052200 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (560.2114ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 cache reload
functional_test.go:1178: (dbg) Done: out/minikube-windows-amd64.exe -p functional-052200 cache reload: (2.7372706s)
functional_test.go:1183: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (4.45s)
TestFunctional/serial/CacheCmd/cache/delete (0.38s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.38s)
TestFunctional/serial/MinikubeKubectlCmd (0.37s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 kubectl -- --context functional-052200 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.37s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.84s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out\kubectl.exe --context functional-052200 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.84s)
TestFunctional/serial/ExtraConfig (49.08s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-052200 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1227 20:06:53.571595   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-052200 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (49.0821166s)
functional_test.go:776: restart took 49.0821565s for "functional-052200" cluster.
I1227 20:07:38.184000   13656 config.go:182] Loaded profile config "functional-052200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (49.08s)
TestFunctional/serial/ComponentHealth (0.13s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-052200 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.13s)
TestFunctional/serial/LogsCmd (1.79s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 logs
functional_test.go:1256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-052200 logs: (1.7946248s)
--- PASS: TestFunctional/serial/LogsCmd (1.79s)
TestFunctional/serial/LogsFileCmd (1.82s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3925562108\001\logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-windows-amd64.exe -p functional-052200 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3925562108\001\logs.txt: (1.8083754s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.82s)
TestFunctional/serial/InvalidService (5.47s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-052200 apply -f testdata\invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-052200
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-052200: exit status 115 (1.0448516s)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32755 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_service_9c977cb937a5c6299cc91c983e64e702e081bf76_1.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-052200 delete -f testdata\invalidsvc.yaml
functional_test.go:2337: (dbg) Done: kubectl --context functional-052200 delete -f testdata\invalidsvc.yaml: (1.0598136s)
--- PASS: TestFunctional/serial/InvalidService (5.47s)
TestFunctional/parallel/ConfigCmd (1.09s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-052200 config get cpus: exit status 14 (174.0037ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-052200 config get cpus: exit status 14 (143.0172ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.09s)
TestFunctional/parallel/DryRun (2s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-052200 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:994: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-052200 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (764.951ms)
-- stdout --
	* [functional-052200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1227 20:08:18.641076    4628 out.go:360] Setting OutFile to fd 2044 ...
	I1227 20:08:18.689077    4628 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:08:18.689077    4628 out.go:374] Setting ErrFile to fd 1360...
	I1227 20:08:18.689077    4628 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:08:18.710081    4628 out.go:368] Setting JSON to false
	I1227 20:08:18.716094    4628 start.go:133] hostinfo: {"hostname":"minikube4","uptime":1485,"bootTime":1766864613,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"22H2","kernelVersion":"10.0.19045.6691 Build 19045.6691","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1227 20:08:18.716094    4628 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1227 20:08:18.720082    4628 out.go:179] * [functional-052200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	I1227 20:08:18.726086    4628 notify.go:221] Checking for updates...
	I1227 20:08:18.728082    4628 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1227 20:08:18.731090    4628 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:08:18.733087    4628 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1227 20:08:18.736094    4628 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:08:18.738103    4628 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:08:18.741091    4628 config.go:182] Loaded profile config "functional-052200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 20:08:18.742083    4628 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:08:18.890106    4628 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1227 20:08:18.893079    4628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:08:19.237276    4628 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:84 SystemTime:2025-12-27 20:08:19.213843662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1227 20:08:19.241267    4628 out.go:179] * Using the docker driver based on existing profile
	I1227 20:08:19.243265    4628 start.go:309] selected driver: docker
	I1227 20:08:19.243265    4628 start.go:928] validating driver "docker" against &{Name:functional-052200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-052200 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:08:19.243265    4628 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:08:19.247273    4628 out.go:203] 
	W1227 20:08:19.255270    4628 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1227 20:08:19.257270    4628 out.go:203] 

** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-052200 --dry-run --alsologtostderr -v=1 --driver=docker
functional_test.go:1011: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-052200 --dry-run --alsologtostderr -v=1 --driver=docker: (1.2295628s)
--- PASS: TestFunctional/parallel/DryRun (2.00s)
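The `RSRC_INSUFFICIENT_REQ_MEMORY` exit seen in the dry-run stderr above comes from minikube validating the requested allocation (250MiB) against a usable floor (1800MB). A minimal sketch of that kind of check; `validateMemory` and its exact wording are hypothetical, not minikube's actual implementation, though the 1800MB floor is taken from the log:

```go
package main

import "fmt"

// minUsableMB is the usable minimum reported in the log above.
const minUsableMB = 1800

// validateMemory returns an error when the requested allocation (in MB)
// falls below the usable minimum, mirroring the failure path above.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // rejected, like --memory 250MB above
	fmt.Println(validateMemory(3072)) // accepted
}
```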

TestFunctional/parallel/InternationalLanguage (0.87s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-052200 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-052200 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (873.9315ms)

-- stdout --
	* [functional-052200] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 22H2
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1227 20:08:18.918082   13036 out.go:360] Setting OutFile to fd 1360 ...
	I1227 20:08:19.039275   13036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:08:19.039275   13036 out.go:374] Setting ErrFile to fd 1412...
	I1227 20:08:19.039275   13036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:08:19.078280   13036 out.go:368] Setting JSON to false
	I1227 20:08:19.083269   13036 start.go:133] hostinfo: {"hostname":"minikube4","uptime":1485,"bootTime":1766864613,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"22H2","kernelVersion":"10.0.19045.6691 Build 19045.6691","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1227 20:08:19.083269   13036 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1227 20:08:19.086272   13036 out.go:179] * [functional-052200] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 22H2
	I1227 20:08:19.088268   13036 notify.go:221] Checking for updates...
	I1227 20:08:19.090265   13036 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1227 20:08:19.092264   13036 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:08:19.096272   13036 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1227 20:08:19.099263   13036 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:08:19.102271   13036 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:08:19.108303   13036 config.go:182] Loaded profile config "functional-052200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 20:08:19.109282   13036 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:08:19.281267   13036 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1227 20:08:19.286276   13036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:08:19.632979   13036 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:84 SystemTime:2025-12-27 20:08:19.594193047 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1227 20:08:19.635970   13036 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1227 20:08:19.639968   13036 start.go:309] selected driver: docker
	I1227 20:08:19.639968   13036 start.go:928] validating driver "docker" against &{Name:functional-052200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-052200 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:08:19.639968   13036 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:08:19.641969   13036 out.go:203] 
	W1227 20:08:19.643976   13036 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1227 20:08:19.645972   13036 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.87s)
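The test above only verifies that minikube's output comes out localized (French here: "Utilisation du pilote docker basé sur le profil existant" and the `RSRC_INSUFFICIENT_REQ_MEMORY` message). A toy sketch of locale lookup with an English fallback; the message table and `translate` are illustrative, not minikube's real translation machinery:

```go
package main

import "fmt"

// messages maps locale -> message key -> localized string.
// The strings are copied from the log above.
var messages = map[string]map[string]string{
	"fr": {"using-driver": "Utilisation du pilote docker basé sur le profil existant"},
	"en": {"using-driver": "Using the docker driver based on existing profile"},
}

// translate returns the message for the given locale,
// falling back to English when no translation exists.
func translate(locale, key string) string {
	if m, ok := messages[locale]; ok {
		if s, ok := m[key]; ok {
			return s
		}
	}
	return messages["en"][key]
}

func main() {
	fmt.Println(translate("fr", "using-driver"))
	fmt.Println(translate("de", "using-driver")) // no German table: English fallback
}
```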

TestFunctional/parallel/StatusCmd (2.31s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 status
functional_test.go:875: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (2.31s)
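The `-f host:{{.Host}},kublet:{{.Kubelet}},...` flag exercised above is a Go `text/template` over a status struct. A minimal sketch of the same mechanism; the `Status` struct and `formatStatus` helper are hypothetical stand-ins, and "kublet" is copied verbatim from the test's `-f` argument:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Status is an illustrative stand-in for the fields the template references.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

// formatStatus renders a status struct through a user-supplied Go template,
// the same mechanism behind a templated status flag.
func formatStatus(format string, st Status) string {
	var buf bytes.Buffer
	template.Must(template.New("status").Parse(format)).Execute(&buf, st)
	return buf.String()
}

func main() {
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	fmt.Println(formatStatus(
		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}", st))
}
```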

TestFunctional/parallel/AddonsCmd (0.4s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.40s)

TestFunctional/parallel/PersistentVolumeClaim (24.45s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [e1fd3cb0-9f3a-441f-ba82-be908907e4f2] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0122012s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-052200 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-052200 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-052200 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-052200 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [dfb835e7-e0f6-49d4-a6df-7ebc7e4bed53] Pending
helpers_test.go:353: "sp-pod" [dfb835e7-e0f6-49d4-a6df-7ebc7e4bed53] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [dfb835e7-e0f6-49d4-a6df-7ebc7e4bed53] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.0062643s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-052200 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-052200 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-052200 delete -f testdata/storage-provisioner/pod.yaml: (1.5175679s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-052200 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [527a1fd3-0d01-46fc-97fb-56f5df0e1f15] Pending
helpers_test.go:353: "sp-pod" [527a1fd3-0d01-46fc-97fb-56f5df0e1f15] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [527a1fd3-0d01-46fc-97fb-56f5df0e1f15] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.0419046s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-052200 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.45s)

TestFunctional/parallel/SSHCmd (1.22s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.22s)

TestFunctional/parallel/CpCmd (3.42s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 ssh -n functional-052200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 cp functional-052200:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd194098732\001\cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 ssh -n functional-052200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 ssh -n functional-052200 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (3.42s)

TestFunctional/parallel/MySQL (76.02s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1803: (dbg) Run:  kubectl --context functional-052200 replace --force -f testdata\mysql.yaml
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-84xn9" [2061a283-6268-42d3-bb0b-f30020f8024e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-84xn9" [2061a283-6268-42d3-bb0b-f30020f8024e] Running
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 57.0452154s
functional_test.go:1817: (dbg) Run:  kubectl --context functional-052200 exec mysql-7d7b65bc95-84xn9 -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-052200 exec mysql-7d7b65bc95-84xn9 -- mysql -ppassword -e "show databases;": exit status 1 (199.872ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1227 20:09:06.332186   13656 retry.go:84] will retry after 1.2s: exit status 1
functional_test.go:1817: (dbg) Run:  kubectl --context functional-052200 exec mysql-7d7b65bc95-84xn9 -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-052200 exec mysql-7d7b65bc95-84xn9 -- mysql -ppassword -e "show databases;": exit status 1 (196.3715ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-052200 exec mysql-7d7b65bc95-84xn9 -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-052200 exec mysql-7d7b65bc95-84xn9 -- mysql -ppassword -e "show databases;": exit status 1 (198.6408ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-052200 exec mysql-7d7b65bc95-84xn9 -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-052200 exec mysql-7d7b65bc95-84xn9 -- mysql -ppassword -e "show databases;": exit status 1 (209.7897ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-052200 exec mysql-7d7b65bc95-84xn9 -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-052200 exec mysql-7d7b65bc95-84xn9 -- mysql -ppassword -e "show databases;": exit status 1 (202.2688ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1227 20:09:15.738371   13656 retry.go:84] will retry after 3.2s: exit status 1
functional_test.go:1817: (dbg) Run:  kubectl --context functional-052200 exec mysql-7d7b65bc95-84xn9 -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-052200 exec mysql-7d7b65bc95-84xn9 -- mysql -ppassword -e "show databases;": exit status 1 (198.2515ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1227 20:09:19.119351   13656 retry.go:84] will retry after 5.4s: exit status 1
functional_test.go:1817: (dbg) Run:  kubectl --context functional-052200 exec mysql-7d7b65bc95-84xn9 -- mysql -ppassword -e "show databases;"
E1227 20:10:31.649326   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:10:59.335111   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (76.02s)

TestFunctional/parallel/FileSync (0.63s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/13656/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 ssh "sudo cat /etc/test/nested/copy/13656/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.63s)

TestFunctional/parallel/CertSync (3.52s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/13656.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 ssh "sudo cat /etc/ssl/certs/13656.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/13656.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 ssh "sudo cat /usr/share/ca-certificates/13656.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/136562.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 ssh "sudo cat /etc/ssl/certs/136562.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/136562.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 ssh "sudo cat /usr/share/ca-certificates/136562.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (3.52s)

TestFunctional/parallel/NodeLabels (0.13s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-052200 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 ssh "sudo systemctl is-active crio"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-052200 ssh "sudo systemctl is-active crio": exit status 1 (572.3821ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

TestFunctional/parallel/License (1.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2298: (dbg) Done: out/minikube-windows-amd64.exe license: (1.2395906s)
--- PASS: TestFunctional/parallel/License (1.25s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-052200 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-052200
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-052200
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-052200 image ls --format short --alsologtostderr:
I1227 20:08:21.695445   10112 out.go:360] Setting OutFile to fd 1632 ...
I1227 20:08:21.738365   10112 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:08:21.738365   10112 out.go:374] Setting ErrFile to fd 1948...
I1227 20:08:21.738889   10112 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:08:21.750539   10112 config.go:182] Loaded profile config "functional-052200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 20:08:21.750539   10112 config.go:182] Loaded profile config "functional-052200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 20:08:21.758148   10112 cli_runner.go:164] Run: docker container inspect functional-052200 --format={{.State.Status}}
I1227 20:08:21.824080   10112 ssh_runner.go:195] Run: systemctl --version
I1227 20:08:21.828079   10112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-052200
I1227 20:08:21.881068   10112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56976 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-052200\id_rsa Username:docker}
I1227 20:08:22.020367   10112 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.47s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-052200 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                       IMAGE                       │        TAG        │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ localhost/my-image                                │ functional-052200 │ acd7028ae171c │ 1.24MB │
│ public.ecr.aws/nginx/nginx                        │ alpine            │ 04da2b0513cd7 │ 53.7MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0           │ 5c6acd67e9cd1 │ 89.8MB │
│ registry.k8s.io/pause                             │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ registry.k8s.io/etcd                              │ 3.6.6-0           │ 0a108f7189562 │ 62.5MB │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                             │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/pause                             │ latest            │ 350b164e7ae1d │ 240kB  │
│ docker.io/library/minikube-local-cache-test       │ functional-052200 │ 8be76c49cbee5 │ 30B    │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1           │ aa5e3ebc0dfed │ 78.1MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-052200 │ 9056ab77afb8e │ 4.94MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest            │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/pause                             │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0           │ 2c9a4b058bd7e │ 75.8MB │
│ registry.k8s.io/kube-proxy                        │ v1.35.0           │ 32652ff1bbe6b │ 70.7MB │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0           │ 550794e3b12ac │ 51.7MB │
└───────────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-052200 image ls --format table --alsologtostderr:
I1227 20:08:32.148896    2196 out.go:360] Setting OutFile to fd 1672 ...
I1227 20:08:32.193448    2196 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:08:32.193448    2196 out.go:374] Setting ErrFile to fd 1896...
I1227 20:08:32.193448    2196 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:08:32.205479    2196 config.go:182] Loaded profile config "functional-052200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 20:08:32.205479    2196 config.go:182] Loaded profile config "functional-052200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 20:08:32.212457    2196 cli_runner.go:164] Run: docker container inspect functional-052200 --format={{.State.Status}}
I1227 20:08:32.286012    2196 ssh_runner.go:195] Run: systemctl --version
I1227 20:08:32.288834    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-052200
I1227 20:08:32.344959    2196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56976 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-052200\id_rsa Username:docker}
I1227 20:08:32.508380    2196 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.56s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-052200 image ls --format json --alsologtostderr:
[{"id":"acd7028ae171c68edd42fda00350e1a9d113801344ed94ffc8da6db6dec21b87","repoDigests":[],"repoTags":["localhost/my-image:functional-052200"],"size":"1240000"},{"id":"32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"70700000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"8be76c49cbee5a8cb4e29a4264dd9c17c377460c8acce939fa3f08958a91ca7f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-052200"],"size":"30"},{"id":"5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"89800000"},{"id":"550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"51700000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-052200","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4940000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5","repoDigests":[],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"53700000"},{"id":"2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"75800000"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"62500000"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"78100000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-052200 image ls --format json --alsologtostderr:
I1227 20:08:31.699298   13608 out.go:360] Setting OutFile to fd 1656 ...
I1227 20:08:31.741856   13608 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:08:31.741856   13608 out.go:374] Setting ErrFile to fd 1760...
I1227 20:08:31.741856   13608 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:08:31.754799   13608 config.go:182] Loaded profile config "functional-052200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 20:08:31.754850   13608 config.go:182] Loaded profile config "functional-052200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 20:08:31.764392   13608 cli_runner.go:164] Run: docker container inspect functional-052200 --format={{.State.Status}}
I1227 20:08:31.826667   13608 ssh_runner.go:195] Run: systemctl --version
I1227 20:08:31.829669   13608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-052200
I1227 20:08:31.881662   13608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56976 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-052200\id_rsa Username:docker}
I1227 20:08:32.009353   13608 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.45s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-052200 image ls --format yaml --alsologtostderr:
- id: 8be76c49cbee5a8cb4e29a4264dd9c17c377460c8acce939fa3f08958a91ca7f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-052200
size: "30"
- id: 2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "75800000"
- id: 550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "51700000"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "62500000"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "78100000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5
repoDigests: []
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "53700000"
- id: 5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "89800000"
- id: 32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "70700000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-052200
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4940000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-052200 image ls --format yaml --alsologtostderr:
I1227 20:08:22.166353    8160 out.go:360] Setting OutFile to fd 1508 ...
I1227 20:08:22.210091    8160 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:08:22.210091    8160 out.go:374] Setting ErrFile to fd 1392...
I1227 20:08:22.210091    8160 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:08:22.222026    8160 config.go:182] Loaded profile config "functional-052200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 20:08:22.223034    8160 config.go:182] Loaded profile config "functional-052200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 20:08:22.233019    8160 cli_runner.go:164] Run: docker container inspect functional-052200 --format={{.State.Status}}
I1227 20:08:22.289376    8160 ssh_runner.go:195] Run: systemctl --version
I1227 20:08:22.292378    8160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-052200
I1227 20:08:22.348802    8160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56976 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-052200\id_rsa Username:docker}
I1227 20:08:22.467184    8160 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.47s)

TestFunctional/parallel/ImageCommands/ImageBuild (9.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-052200 ssh pgrep buildkitd: exit status 1 (528.0499ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 image build -t localhost/my-image:functional-052200 testdata\build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-windows-amd64.exe -p functional-052200 image build -t localhost/my-image:functional-052200 testdata\build --alsologtostderr: (8.0428784s)
functional_test.go:338: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-052200 image build -t localhost/my-image:functional-052200 testdata\build --alsologtostderr:
I1227 20:08:23.161410    4408 out.go:360] Setting OutFile to fd 1496 ...
I1227 20:08:23.222895    4408 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:08:23.222895    4408 out.go:374] Setting ErrFile to fd 1952...
I1227 20:08:23.222895    4408 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:08:23.236556    4408 config.go:182] Loaded profile config "functional-052200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 20:08:23.258226    4408 config.go:182] Loaded profile config "functional-052200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 20:08:23.265847    4408 cli_runner.go:164] Run: docker container inspect functional-052200 --format={{.State.Status}}
I1227 20:08:23.325574    4408 ssh_runner.go:195] Run: systemctl --version
I1227 20:08:23.328578    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-052200
I1227 20:08:23.380574    4408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56976 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-052200\id_rsa Username:docker}
I1227 20:08:23.502596    4408 build_images.go:162] Building image from path: C:\Users\jenkins.minikube4\AppData\Local\Temp\build.3045596842.tar
I1227 20:08:23.507226    4408 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1227 20:08:23.526648    4408 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3045596842.tar
I1227 20:08:23.539549    4408 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3045596842.tar: stat -c "%s %y" /var/lib/minikube/build/build.3045596842.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3045596842.tar': No such file or directory
I1227 20:08:23.539549    4408 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\AppData\Local\Temp\build.3045596842.tar --> /var/lib/minikube/build/build.3045596842.tar (3072 bytes)
I1227 20:08:23.571194    4408 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3045596842
I1227 20:08:23.599715    4408 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3045596842 -xf /var/lib/minikube/build/build.3045596842.tar
I1227 20:08:23.617106    4408 docker.go:364] Building image: /var/lib/minikube/build/build.3045596842
I1227 20:08:23.620097    4408 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-052200 /var/lib/minikube/build/build.3045596842
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 ...

#5 [internal] load build context
#5 transferring context: 62B 0.0s done
#5 DONE 0.1s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#4 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#4 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#4 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#4 DONE 0.8s

#6 [2/3] RUN true
#6 DONE 4.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.2s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 writing image sha256:acd7028ae171c68edd42fda00350e1a9d113801344ed94ffc8da6db6dec21b87
#8 writing image sha256:acd7028ae171c68edd42fda00350e1a9d113801344ed94ffc8da6db6dec21b87 0.0s done
#8 naming to localhost/my-image:functional-052200 0.0s done
#8 DONE 0.2s
I1227 20:08:31.024486    4408 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-052200 /var/lib/minikube/build/build.3045596842: (7.404343s)
I1227 20:08:31.029124    4408 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3045596842
I1227 20:08:31.048177    4408 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3045596842.tar
I1227 20:08:31.102332    4408 build_images.go:218] Built localhost/my-image:functional-052200 from C:\Users\jenkins.minikube4\AppData\Local\Temp\build.3045596842.tar
I1227 20:08:31.102332    4408 build_images.go:134] succeeded building to: functional-052200
I1227 20:08:31.102332    4408 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (9.07s)

TestFunctional/parallel/ImageCommands/Setup (1.63s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0: (1.5057778s)
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-052200
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.63s)
TestFunctional/parallel/DockerEnv/powershell (5.89s)
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:514: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-052200 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-052200"
functional_test.go:514: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-052200 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-052200": (3.4767422s)
functional_test.go:537: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-052200 docker-env | Invoke-Expression ; docker images"
functional_test.go:537: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-052200 docker-env | Invoke-Expression ; docker images": (2.4070449s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (5.89s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.76s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-052200 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-windows-amd64.exe -p functional-052200 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-052200 --alsologtostderr: (3.2611809s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.76s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.4s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.40s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.32s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.32s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.36s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.36s)
TestFunctional/parallel/ServiceCmd/DeployApp (9.31s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-052200 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-052200 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-8qtss" [c771cea9-6d05-4ba4-8fbf-66c79566ef5a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-8qtss" [c771cea9-6d05-4ba4-8fbf-66c79566ef5a] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.0088535s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.31s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-052200 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-052200 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-052200 --alsologtostderr: (2.77386s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.22s)
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.88s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-052200 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-052200 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-052200 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 11724: OpenProcess: The parameter is incorrect.
helpers_test.go:526: unable to kill pid 10536: OpenProcess: The parameter is incorrect.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-052200 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.88s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-052200 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.44s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-052200 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [3ab2f3ce-7e35-4e7e-8036-60567ea39207] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [3ab2f3ce-7e35-4e7e-8036-60567ea39207] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.0076476s
I1227 20:08:07.165004   13656 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.44s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.04s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-052200
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-052200 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-052200 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-052200 --alsologtostderr: (2.7220707s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.04s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.8s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-052200 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.80s)
TestFunctional/parallel/ServiceCmd/List (0.77s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.77s)
TestFunctional/parallel/ImageCommands/ImageRemove (1.03s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-052200 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.03s)
TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 service list -o json
functional_test.go:1509: Took "660.647ms" to run "out/minikube-windows-amd64.exe -p functional-052200 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.12s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.12s)
TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 service --namespace=default --https --url hello-node
functional_test.go:1524: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-052200 service --namespace=default --https --url hello-node: exit status 1 (15.0150084s)
-- stdout --
	https://127.0.0.1:57433
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1537: found endpoint: https://127.0.0.1:57433
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.86s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-052200
functional_test.go:439: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-052200 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-052200
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.86s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-052200 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)
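The IngressIP check above reads the tunnel-assigned address with `-o jsonpath={.status.loadBalancer.ingress[0].ip}`. As a rough sketch of what that jsonpath selects, here is the same extraction run against a hypothetical service status (the JSON snippet is illustrative, not captured from this run):

```shell
# Hypothetical LoadBalancer service status; in the test, minikube tunnel is what
# populates .status.loadBalancer.ingress[0].ip on nginx-svc.
svc_json='{"status":{"loadBalancer":{"ingress":[{"ip":"127.0.0.1"}]}}}'
# Equivalent of jsonpath {.status.loadBalancer.ingress[0].ip} on the sample above.
printf '%s\n' "$svc_json" | sed -n 's/.*"ip":"\([^"]*\)".*/\1/p'
```

On the sample this prints `127.0.0.1`; the sed fallback is only a stand-in for kubectl's jsonpath evaluation on a single-ingress service.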
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-052200 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 4328: TerminateProcess: Access is denied.
helpers_test.go:526: unable to kill pid 14008: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)
TestFunctional/parallel/Version/short (0.18s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 version --short
--- PASS: TestFunctional/parallel/Version/short (0.18s)
TestFunctional/parallel/Version/components (1.24s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 version -o=json --components
functional_test.go:2280: (dbg) Done: out/minikube-windows-amd64.exe -p functional-052200 version -o=json --components: (1.2423797s)
--- PASS: TestFunctional/parallel/Version/components (1.24s)
TestFunctional/parallel/ProfileCmd/profile_not_create (1.14s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E1227 20:08:15.492776   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (1.14s)
TestFunctional/parallel/ProfileCmd/profile_list (1.19s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1335: Took "982.289ms" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1349: Took "204.2782ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (1.19s)
TestFunctional/parallel/ProfileCmd/profile_json_output (1.37s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1381: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (1.1639536s)
functional_test.go:1386: Took "1.1649674s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1399: Took "206.9913ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (1.37s)
TestFunctional/parallel/ServiceCmd/Format (15.01s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 service hello-node --url --format={{.IP}}
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-052200 service hello-node --url --format={{.IP}}: exit status 1 (15.0142787s)
-- stdout --
	127.0.0.1
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.01s)
TestFunctional/parallel/ServiceCmd/URL (15.01s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-052200 service hello-node --url
functional_test.go:1574: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-052200 service hello-node --url: exit status 1 (15.0132868s)
-- stdout --
	http://127.0.0.1:57522
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1580: found endpoint for hello-node: http://127.0.0.1:57522
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.01s)
TestFunctional/delete_echo-server_images (0.14s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-052200
--- PASS: TestFunctional/delete_echo-server_images (0.14s)
TestFunctional/delete_my-image_image (0.05s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-052200
--- PASS: TestFunctional/delete_my-image_image (0.05s)
TestFunctional/delete_minikube_cached_images (0.05s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-052200
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)
TestMultiControlPlane/serial/StartCluster (204.57s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker
E1227 20:15:31.650797   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe -p ha-668100 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker: (3m23.023896s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 status --alsologtostderr -v 5
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-668100 status --alsologtostderr -v 5: (1.5499167s)
--- PASS: TestMultiControlPlane/serial/StartCluster (204.57s)
TestMultiControlPlane/serial/DeployApp (9.05s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe -p ha-668100 kubectl -- rollout status deployment/busybox: (4.1175693s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 kubectl -- exec busybox-769dd8b7dd-6sfd5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 kubectl -- exec busybox-769dd8b7dd-7zkfl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 kubectl -- exec busybox-769dd8b7dd-sxrhs -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 kubectl -- exec busybox-769dd8b7dd-6sfd5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 kubectl -- exec busybox-769dd8b7dd-7zkfl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 kubectl -- exec busybox-769dd8b7dd-sxrhs -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 kubectl -- exec busybox-769dd8b7dd-6sfd5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 kubectl -- exec busybox-769dd8b7dd-7zkfl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 kubectl -- exec busybox-769dd8b7dd-sxrhs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.05s)
TestMultiControlPlane/serial/PingHostFromPods (2.45s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 kubectl -- exec busybox-769dd8b7dd-6sfd5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 kubectl -- exec busybox-769dd8b7dd-6sfd5 -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 kubectl -- exec busybox-769dd8b7dd-7zkfl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 kubectl -- exec busybox-769dd8b7dd-7zkfl -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 kubectl -- exec busybox-769dd8b7dd-sxrhs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 kubectl -- exec busybox-769dd8b7dd-sxrhs -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (2.45s)
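Each PingHostFromPods probe above runs `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` inside a busybox pod to pull out the host IP it then pings. A minimal sketch of that pipeline, run against a hypothetical BusyBox-style nslookup transcript rather than a live pod (the sample lines are illustrative; only the awk/cut stage is taken verbatim from the log):

```shell
# Hypothetical BusyBox nslookup output for host.minikube.internal.
nslookup_out='Server:    10.96.0.10
Address 1: 10.96.0.10:53

Name:      host.minikube.internal
Address 1: 192.168.65.254'
# Line 5, third space-delimited field: the resolved host address the test pings.
printf '%s\n' "$nslookup_out" | awk 'NR==5' | cut -d' ' -f3
```

On the sample this prints `192.168.65.254`, matching the `ping -c 1 192.168.65.254` commands that follow each lookup in the log; the hard-coded line number is why the test depends on BusyBox's fixed output layout.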
TestMultiControlPlane/serial/AddWorkerNode (54.58s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 node add --alsologtostderr -v 5
E1227 20:17:51.763999   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:17:51.769410   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:17:51.780136   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:17:51.800761   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:17:51.841421   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:17:51.921617   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:17:52.082256   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:17:52.402348   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:17:53.043485   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe -p ha-668100 node add --alsologtostderr -v 5: (52.7065836s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 status --alsologtostderr -v 5
E1227 20:17:54.324191   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-668100 status --alsologtostderr -v 5: (1.8704024s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.58s)

TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-668100 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.98s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E1227 20:17:56.884290   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.9756863s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.98s)

TestMultiControlPlane/serial/CopyFile (32.94s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-windows-amd64.exe -p ha-668100 status --output json --alsologtostderr -v 5: (1.8512021s)
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 cp testdata\cp-test.txt ha-668100:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 cp ha-668100:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1002559731\001\cp-test_ha-668100.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 cp ha-668100:/home/docker/cp-test.txt ha-668100-m02:/home/docker/cp-test_ha-668100_ha-668100-m02.txt
E1227 20:18:02.004945   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100-m02 "sudo cat /home/docker/cp-test_ha-668100_ha-668100-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 cp ha-668100:/home/docker/cp-test.txt ha-668100-m03:/home/docker/cp-test_ha-668100_ha-668100-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100-m03 "sudo cat /home/docker/cp-test_ha-668100_ha-668100-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 cp ha-668100:/home/docker/cp-test.txt ha-668100-m04:/home/docker/cp-test_ha-668100_ha-668100-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100-m04 "sudo cat /home/docker/cp-test_ha-668100_ha-668100-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 cp testdata\cp-test.txt ha-668100-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 cp ha-668100-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1002559731\001\cp-test_ha-668100-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 cp ha-668100-m02:/home/docker/cp-test.txt ha-668100:/home/docker/cp-test_ha-668100-m02_ha-668100.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100 "sudo cat /home/docker/cp-test_ha-668100-m02_ha-668100.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 cp ha-668100-m02:/home/docker/cp-test.txt ha-668100-m03:/home/docker/cp-test_ha-668100-m02_ha-668100-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100-m02 "sudo cat /home/docker/cp-test.txt"
E1227 20:18:12.245700   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100-m03 "sudo cat /home/docker/cp-test_ha-668100-m02_ha-668100-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 cp ha-668100-m02:/home/docker/cp-test.txt ha-668100-m04:/home/docker/cp-test_ha-668100-m02_ha-668100-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100-m04 "sudo cat /home/docker/cp-test_ha-668100-m02_ha-668100-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 cp testdata\cp-test.txt ha-668100-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 cp ha-668100-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1002559731\001\cp-test_ha-668100-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 cp ha-668100-m03:/home/docker/cp-test.txt ha-668100:/home/docker/cp-test_ha-668100-m03_ha-668100.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100 "sudo cat /home/docker/cp-test_ha-668100-m03_ha-668100.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 cp ha-668100-m03:/home/docker/cp-test.txt ha-668100-m02:/home/docker/cp-test_ha-668100-m03_ha-668100-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100-m02 "sudo cat /home/docker/cp-test_ha-668100-m03_ha-668100-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 cp ha-668100-m03:/home/docker/cp-test.txt ha-668100-m04:/home/docker/cp-test_ha-668100-m03_ha-668100-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100-m04 "sudo cat /home/docker/cp-test_ha-668100-m03_ha-668100-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 cp testdata\cp-test.txt ha-668100-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 cp ha-668100-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1002559731\001\cp-test_ha-668100-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 cp ha-668100-m04:/home/docker/cp-test.txt ha-668100:/home/docker/cp-test_ha-668100-m04_ha-668100.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100 "sudo cat /home/docker/cp-test_ha-668100-m04_ha-668100.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 cp ha-668100-m04:/home/docker/cp-test.txt ha-668100-m02:/home/docker/cp-test_ha-668100-m04_ha-668100-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100-m02 "sudo cat /home/docker/cp-test_ha-668100-m04_ha-668100-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 cp ha-668100-m04:/home/docker/cp-test.txt ha-668100-m03:/home/docker/cp-test_ha-668100-m04_ha-668100-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 ssh -n ha-668100-m03 "sudo cat /home/docker/cp-test_ha-668100-m04_ha-668100-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (32.94s)

TestMultiControlPlane/serial/StopSecondaryNode (13.36s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 node stop m02 --alsologtostderr -v 5
E1227 20:18:32.726313   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p ha-668100 node stop m02 --alsologtostderr -v 5: (11.8633238s)
ha_test.go:371: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-668100 status --alsologtostderr -v 5: exit status 7 (1.4894939s)

-- stdout --
	ha-668100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-668100-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-668100-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-668100-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1227 20:18:42.357232    3256 out.go:360] Setting OutFile to fd 1708 ...
	I1227 20:18:42.398795    3256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:18:42.398795    3256 out.go:374] Setting ErrFile to fd 1916...
	I1227 20:18:42.398795    3256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:18:42.413958    3256 out.go:368] Setting JSON to false
	I1227 20:18:42.413958    3256 mustload.go:66] Loading cluster: ha-668100
	I1227 20:18:42.413958    3256 notify.go:221] Checking for updates...
	I1227 20:18:42.413958    3256 config.go:182] Loaded profile config "ha-668100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 20:18:42.413958    3256 status.go:174] checking status of ha-668100 ...
	I1227 20:18:42.421954    3256 cli_runner.go:164] Run: docker container inspect ha-668100 --format={{.State.Status}}
	I1227 20:18:42.478862    3256 status.go:371] ha-668100 host status = "Running" (err=<nil>)
	I1227 20:18:42.478862    3256 host.go:66] Checking if "ha-668100" exists ...
	I1227 20:18:42.484119    3256 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-668100
	I1227 20:18:42.539404    3256 host.go:66] Checking if "ha-668100" exists ...
	I1227 20:18:42.544762    3256 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:18:42.547170    3256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-668100
	I1227 20:18:42.603537    3256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57570 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-668100\id_rsa Username:docker}
	I1227 20:18:42.726234    3256 ssh_runner.go:195] Run: systemctl --version
	I1227 20:18:42.742123    3256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:18:42.763057    3256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-668100
	I1227 20:18:42.820095    3256 kubeconfig.go:125] found "ha-668100" server: "https://127.0.0.1:57569"
	I1227 20:18:42.820095    3256 api_server.go:166] Checking apiserver status ...
	I1227 20:18:42.825715    3256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:18:42.853039    3256 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2376/cgroup
	I1227 20:18:42.865502    3256 api_server.go:192] apiserver freezer: "7:freezer:/docker/3fa4e72809bee3ff6212b461f5053519cfc3ced8e2985830f74599d724743cea/kubepods/burstable/pod84c10f5a78e558e1eca8c81cf0272fc5/35fe3be92075897bc734ca06c5b63c414a67c3d0b94051939c5296af8ec34547"
	I1227 20:18:42.870384    3256 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3fa4e72809bee3ff6212b461f5053519cfc3ced8e2985830f74599d724743cea/kubepods/burstable/pod84c10f5a78e558e1eca8c81cf0272fc5/35fe3be92075897bc734ca06c5b63c414a67c3d0b94051939c5296af8ec34547/freezer.state
	I1227 20:18:42.884964    3256 api_server.go:214] freezer state: "THAWED"
	I1227 20:18:42.884964    3256 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:57569/healthz ...
	I1227 20:18:42.895866    3256 api_server.go:325] https://127.0.0.1:57569/healthz returned 200:
	ok
	I1227 20:18:42.895866    3256 status.go:463] ha-668100 apiserver status = Running (err=<nil>)
	I1227 20:18:42.895866    3256 status.go:176] ha-668100 status: &{Name:ha-668100 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:18:42.895866    3256 status.go:174] checking status of ha-668100-m02 ...
	I1227 20:18:42.903558    3256 cli_runner.go:164] Run: docker container inspect ha-668100-m02 --format={{.State.Status}}
	I1227 20:18:42.955524    3256 status.go:371] ha-668100-m02 host status = "Stopped" (err=<nil>)
	I1227 20:18:42.955524    3256 status.go:384] host is not running, skipping remaining checks
	I1227 20:18:42.955524    3256 status.go:176] ha-668100-m02 status: &{Name:ha-668100-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:18:42.955524    3256 status.go:174] checking status of ha-668100-m03 ...
	I1227 20:18:42.963412    3256 cli_runner.go:164] Run: docker container inspect ha-668100-m03 --format={{.State.Status}}
	I1227 20:18:43.016864    3256 status.go:371] ha-668100-m03 host status = "Running" (err=<nil>)
	I1227 20:18:43.016864    3256 host.go:66] Checking if "ha-668100-m03" exists ...
	I1227 20:18:43.020847    3256 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-668100-m03
	I1227 20:18:43.076837    3256 host.go:66] Checking if "ha-668100-m03" exists ...
	I1227 20:18:43.081532    3256 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:18:43.084753    3256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-668100-m03
	I1227 20:18:43.141378    3256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57693 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-668100-m03\id_rsa Username:docker}
	I1227 20:18:43.275386    3256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:18:43.301412    3256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-668100
	I1227 20:18:43.356550    3256 kubeconfig.go:125] found "ha-668100" server: "https://127.0.0.1:57569"
	I1227 20:18:43.356613    3256 api_server.go:166] Checking apiserver status ...
	I1227 20:18:43.360432    3256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:18:43.385832    3256 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2308/cgroup
	I1227 20:18:43.400066    3256 api_server.go:192] apiserver freezer: "7:freezer:/docker/7e8b30ae59e73bbfc77a7ca16ffd50bb558b79ce2fa950f66fbd702b7beb691f/kubepods/burstable/pod25b22292b87d44407e9a761e9736aea8/5fa0be471ec573c9f8242b307fa83ad463cc3f285a4795508092afdf8b51b44f"
	I1227 20:18:43.405430    3256 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7e8b30ae59e73bbfc77a7ca16ffd50bb558b79ce2fa950f66fbd702b7beb691f/kubepods/burstable/pod25b22292b87d44407e9a761e9736aea8/5fa0be471ec573c9f8242b307fa83ad463cc3f285a4795508092afdf8b51b44f/freezer.state
	I1227 20:18:43.424600    3256 api_server.go:214] freezer state: "THAWED"
	I1227 20:18:43.424669    3256 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:57569/healthz ...
	I1227 20:18:43.435082    3256 api_server.go:325] https://127.0.0.1:57569/healthz returned 200:
	ok
	I1227 20:18:43.435082    3256 status.go:463] ha-668100-m03 apiserver status = Running (err=<nil>)
	I1227 20:18:43.435082    3256 status.go:176] ha-668100-m03 status: &{Name:ha-668100-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:18:43.435082    3256 status.go:174] checking status of ha-668100-m04 ...
	I1227 20:18:43.442088    3256 cli_runner.go:164] Run: docker container inspect ha-668100-m04 --format={{.State.Status}}
	I1227 20:18:43.495600    3256 status.go:371] ha-668100-m04 host status = "Running" (err=<nil>)
	I1227 20:18:43.495647    3256 host.go:66] Checking if "ha-668100-m04" exists ...
	I1227 20:18:43.501551    3256 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-668100-m04
	I1227 20:18:43.553937    3256 host.go:66] Checking if "ha-668100-m04" exists ...
	I1227 20:18:43.559449    3256 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:18:43.562506    3256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-668100-m04
	I1227 20:18:43.617659    3256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57828 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-668100-m04\id_rsa Username:docker}
	I1227 20:18:43.733150    3256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:18:43.750611    3256 status.go:176] ha-668100-m04 status: &{Name:ha-668100-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.36s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.5362388s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.54s)

TestMultiControlPlane/serial/RestartSecondaryNode (47.09s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 node start m02 --alsologtostderr -v 5
E1227 20:19:13.687721   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p ha-668100 node start m02 --alsologtostderr -v 5: (44.994706s)
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-windows-amd64.exe -p ha-668100 status --alsologtostderr -v 5: (1.9597417s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (47.09s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.95s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.9487785s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.95s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (167.99s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-windows-amd64.exe -p ha-668100 stop --alsologtostderr -v 5: (39.2849878s)
ha_test.go:469: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 start --wait true --alsologtostderr -v 5
E1227 20:20:31.652920   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:20:35.608792   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:21:54.700406   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-windows-amd64.exe -p ha-668100 start --wait true --alsologtostderr -v 5: (2m8.3828492s)
ha_test.go:474: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (167.99s)

TestMultiControlPlane/serial/DeleteSecondaryNode (14.2s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-windows-amd64.exe -p ha-668100 node delete m03 --alsologtostderr -v 5: (12.3679819s)
ha_test.go:495: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Done: out/minikube-windows-amd64.exe -p ha-668100 status --alsologtostderr -v 5: (1.4235201s)
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (14.20s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.49s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.4846866s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.49s)

TestMultiControlPlane/serial/StopCluster (37.05s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 stop --alsologtostderr -v 5
E1227 20:22:51.766448   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p ha-668100 stop --alsologtostderr -v 5: (36.7235973s)
ha_test.go:539: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-668100 status --alsologtostderr -v 5: exit status 7 (325.2608ms)

-- stdout --
	ha-668100
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-668100-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-668100-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1227 20:23:14.819809    9036 out.go:360] Setting OutFile to fd 2040 ...
	I1227 20:23:14.862986    9036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:23:14.862986    9036 out.go:374] Setting ErrFile to fd 1892...
	I1227 20:23:14.862986    9036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:23:14.873558    9036 out.go:368] Setting JSON to false
	I1227 20:23:14.873558    9036 mustload.go:66] Loading cluster: ha-668100
	I1227 20:23:14.874305    9036 notify.go:221] Checking for updates...
	I1227 20:23:14.874402    9036 config.go:182] Loaded profile config "ha-668100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 20:23:14.874402    9036 status.go:174] checking status of ha-668100 ...
	I1227 20:23:14.882676    9036 cli_runner.go:164] Run: docker container inspect ha-668100 --format={{.State.Status}}
	I1227 20:23:14.935816    9036 status.go:371] ha-668100 host status = "Stopped" (err=<nil>)
	I1227 20:23:14.935816    9036 status.go:384] host is not running, skipping remaining checks
	I1227 20:23:14.935816    9036 status.go:176] ha-668100 status: &{Name:ha-668100 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:23:14.935816    9036 status.go:174] checking status of ha-668100-m02 ...
	I1227 20:23:14.943180    9036 cli_runner.go:164] Run: docker container inspect ha-668100-m02 --format={{.State.Status}}
	I1227 20:23:14.993638    9036 status.go:371] ha-668100-m02 host status = "Stopped" (err=<nil>)
	I1227 20:23:14.993638    9036 status.go:384] host is not running, skipping remaining checks
	I1227 20:23:14.993638    9036 status.go:176] ha-668100-m02 status: &{Name:ha-668100-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:23:14.993638    9036 status.go:174] checking status of ha-668100-m04 ...
	I1227 20:23:15.000647    9036 cli_runner.go:164] Run: docker container inspect ha-668100-m04 --format={{.State.Status}}
	I1227 20:23:15.055317    9036 status.go:371] ha-668100-m04 host status = "Stopped" (err=<nil>)
	I1227 20:23:15.055317    9036 status.go:384] host is not running, skipping remaining checks
	I1227 20:23:15.055317    9036 status.go:176] ha-668100-m04 status: &{Name:ha-668100-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (37.05s)

TestMultiControlPlane/serial/RestartCluster (76.67s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 start --wait true --alsologtostderr -v 5 --driver=docker
E1227 20:23:19.450444   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-windows-amd64.exe -p ha-668100 start --wait true --alsologtostderr -v 5 --driver=docker: (1m14.8823227s)
ha_test.go:568: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 status --alsologtostderr -v 5
ha_test.go:568: (dbg) Done: out/minikube-windows-amd64.exe -p ha-668100 status --alsologtostderr -v 5: (1.4027795s)
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (76.67s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.44s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.4424255s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.44s)

TestMultiControlPlane/serial/AddSecondaryNode (79.16s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 node add --control-plane --alsologtostderr -v 5
E1227 20:25:31.655949   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-windows-amd64.exe -p ha-668100 node add --control-plane --alsologtostderr -v 5: (1m17.2796352s)
ha_test.go:613: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-668100 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-windows-amd64.exe -p ha-668100 status --alsologtostderr -v 5: (1.8825008s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.16s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.93s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.9310636s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.93s)

TestImageBuild/serial/Setup (40.79s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-437100 --driver=docker
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-437100 --driver=docker: (40.7878682s)
--- PASS: TestImageBuild/serial/Setup (40.79s)

TestImageBuild/serial/NormalBuild (4.4s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-437100
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-437100: (4.4003724s)
--- PASS: TestImageBuild/serial/NormalBuild (4.40s)

TestImageBuild/serial/BuildWithBuildArg (1.98s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-437100
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-437100: (1.9771281s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.98s)

TestImageBuild/serial/BuildWithDockerIgnore (1.26s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-437100
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-437100: (1.2632643s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.26s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.22s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-437100
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-437100: (1.222256s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.22s)

TestJSONOutput/start/Command (72.39s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-336600 --output=json --user=testUser --memory=3072 --wait=true --driver=docker
E1227 20:27:51.769155   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-336600 --output=json --user=testUser --memory=3072 --wait=true --driver=docker: (1m12.3887916s)
--- PASS: TestJSONOutput/start/Command (72.39s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (1.12s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-336600 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-336600 --output=json --user=testUser: (1.115192s)
--- PASS: TestJSONOutput/pause/Command (1.12s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.9s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-336600 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.90s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.01s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-336600 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-336600 --output=json --user=testUser: (12.011206s)
--- PASS: TestJSONOutput/stop/Command (12.01s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.67s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-878500 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-878500 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (214.7396ms)

-- stdout --
	{"specversion":"1.0","id":"903bb48e-b28c-4c55-bc5d-d9e1388887c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-878500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b02384fe-1d37-4774-962c-f081cb6d78eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"047a869b-29e5-4cbb-9ba1-9db882eb2483","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1cddd1fd-41b3-4913-9263-a7a1be707973","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"1a01ec3f-afe9-4761-acba-d336fd828c90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22332"}}
	{"specversion":"1.0","id":"d2f145a5-4863-451c-b33a-f0a17ee5568f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e1460dc7-0eb2-4626-9051-d45a0dbdceb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-878500" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-878500
--- PASS: TestErrorJSONOutput (0.67s)

TestKicCustomNetwork/create_custom_network (48.67s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-502200 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-502200 --network=: (45.1633055s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-502200" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-502200
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-502200: (3.4395118s)
--- PASS: TestKicCustomNetwork/create_custom_network (48.67s)

TestKicCustomNetwork/use_default_bridge_network (48.41s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-922500 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-922500 --network=bridge: (45.1871455s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-922500" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-922500
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-922500: (3.1641787s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (48.41s)

TestKicExistingNetwork (49.35s)
=== RUN   TestKicExistingNetwork
I1227 20:30:11.815371   13656 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1227 20:30:11.876367   13656 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1227 20:30:11.880750   13656 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1227 20:30:11.880786   13656 cli_runner.go:164] Run: docker network inspect existing-network
W1227 20:30:11.940674   13656 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1227 20:30:11.940708   13656 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1227 20:30:11.940708   13656 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1227 20:30:11.944283   13656 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 20:30:12.014941   13656 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ab09c0}
I1227 20:30:12.014941   13656 network_create.go:124] attempt to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1227 20:30:12.018664   13656 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
W1227 20:30:12.075863   13656 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network returned with exit code 1
W1227 20:30:12.075863   13656 network_create.go:149] failed to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network: exit status 1
stdout:

stderr:
Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
W1227 20:30:12.075863   13656 network_create.go:116] failed to create docker network existing-network 192.168.49.0/24, will retry: subnet is taken
I1227 20:30:12.106873   13656 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1227 20:30:12.120178   13656 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0008b6c90}
I1227 20:30:12.121187   13656 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1227 20:30:12.125056   13656 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1227 20:30:12.273784   13656 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-784300 --network=existing-network
E1227 20:30:31.658783   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-784300 --network=existing-network: (45.6532019s)
helpers_test.go:176: Cleaning up "existing-network-784300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-784300
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-784300: (3.1119056s)
I1227 20:31:01.108785   13656 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (49.35s)

TestKicCustomSubnet (49.35s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-660800 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-660800 --subnet=192.168.60.0/24: (45.833941s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-660800 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-660800" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-660800
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-660800: (3.4554936s)
--- PASS: TestKicCustomSubnet (49.35s)

TestKicStaticIP (50.04s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p static-ip-235900 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p static-ip-235900 --static-ip=192.168.200.200: (46.1745145s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe -p static-ip-235900 ip
helpers_test.go:176: Cleaning up "static-ip-235900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p static-ip-235900
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p static-ip-235900: (3.552565s)
--- PASS: TestKicStaticIP (50.04s)

TestMainNoArgs (0.16s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.16s)

TestMinikubeProfile (91.94s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-611900 --driver=docker
E1227 20:32:51.771577   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-611900 --driver=docker: (41.7680187s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-611900 --driver=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-611900 --driver=docker: (40.1181675s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-611900
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.1809873s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-611900
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.2423399s)
helpers_test.go:176: Cleaning up "second-611900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-611900
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-611900: (3.606846s)
helpers_test.go:176: Cleaning up "first-611900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-611900
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-611900: (3.5838412s)
--- PASS: TestMinikubeProfile (91.94s)

TestMountStart/serial/StartWithMountFirst (13.56s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-734300 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial37914199\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
E1227 20:34:14.816800   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-734300 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial37914199\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (12.5590039s)
--- PASS: TestMountStart/serial/StartWithMountFirst (13.56s)

TestMountStart/serial/VerifyMountFirst (0.55s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-734300 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.55s)

TestMountStart/serial/StartWithMountSecond (13.5s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-734300 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial37914199\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-734300 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial37914199\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (12.4946129s)
--- PASS: TestMountStart/serial/StartWithMountSecond (13.50s)

TestMountStart/serial/VerifyMountSecond (0.53s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-734300 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.53s)

TestMountStart/serial/DeleteFirst (2.44s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-734300 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-734300 --alsologtostderr -v=5: (2.4364904s)
--- PASS: TestMountStart/serial/DeleteFirst (2.44s)

TestMountStart/serial/VerifyMountPostDelete (0.52s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-734300 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.52s)

TestMountStart/serial/Stop (1.85s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-734300
mount_start_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-734300: (1.8490574s)
--- PASS: TestMountStart/serial/Stop (1.85s)

TestMountStart/serial/RestartStopped (10.76s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-734300
mount_start_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-734300: (9.7615271s)
--- PASS: TestMountStart/serial/RestartStopped (10.76s)

TestMountStart/serial/VerifyMountPostStop (0.54s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-734300 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.54s)

TestMultiNode/serial/FreshStart2Nodes (122.5s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-671700 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker
E1227 20:35:31.660284   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-671700 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker: (2m1.5324421s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (122.50s)

TestMultiNode/serial/DeployApp2Nodes (7.05s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671700 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671700 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-671700 -- rollout status deployment/busybox: (3.2904387s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671700 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671700 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671700 -- exec busybox-769dd8b7dd-qlqgv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671700 -- exec busybox-769dd8b7dd-zs5bz -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671700 -- exec busybox-769dd8b7dd-qlqgv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671700 -- exec busybox-769dd8b7dd-zs5bz -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671700 -- exec busybox-769dd8b7dd-qlqgv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671700 -- exec busybox-769dd8b7dd-zs5bz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.05s)

TestMultiNode/serial/PingHostFrom2Pods (1.73s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671700 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671700 -- exec busybox-769dd8b7dd-qlqgv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671700 -- exec busybox-769dd8b7dd-qlqgv -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671700 -- exec busybox-769dd8b7dd-zs5bz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671700 -- exec busybox-769dd8b7dd-zs5bz -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.73s)

TestMultiNode/serial/AddNode (53.42s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-671700 -v=5 --alsologtostderr
E1227 20:37:51.774493   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-671700 -v=5 --alsologtostderr: (52.1246788s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671700 status --alsologtostderr: (1.2950875s)
--- PASS: TestMultiNode/serial/AddNode (53.42s)

TestMultiNode/serial/MultiNodeLabels (0.13s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-671700 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.13s)

TestMultiNode/serial/ProfileList (1.37s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.3660066s)
--- PASS: TestMultiNode/serial/ProfileList (1.37s)

TestMultiNode/serial/CopyFile (18.8s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671700 status --output json --alsologtostderr: (1.2895712s)
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 cp testdata\cp-test.txt multinode-671700:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 ssh -n multinode-671700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 cp multinode-671700:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile3753929611\001\cp-test_multinode-671700.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 ssh -n multinode-671700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 cp multinode-671700:/home/docker/cp-test.txt multinode-671700-m02:/home/docker/cp-test_multinode-671700_multinode-671700-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 ssh -n multinode-671700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 ssh -n multinode-671700-m02 "sudo cat /home/docker/cp-test_multinode-671700_multinode-671700-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 cp multinode-671700:/home/docker/cp-test.txt multinode-671700-m03:/home/docker/cp-test_multinode-671700_multinode-671700-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 ssh -n multinode-671700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 ssh -n multinode-671700-m03 "sudo cat /home/docker/cp-test_multinode-671700_multinode-671700-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 cp testdata\cp-test.txt multinode-671700-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 ssh -n multinode-671700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 cp multinode-671700-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile3753929611\001\cp-test_multinode-671700-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 ssh -n multinode-671700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 cp multinode-671700-m02:/home/docker/cp-test.txt multinode-671700:/home/docker/cp-test_multinode-671700-m02_multinode-671700.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 ssh -n multinode-671700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 ssh -n multinode-671700 "sudo cat /home/docker/cp-test_multinode-671700-m02_multinode-671700.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 cp multinode-671700-m02:/home/docker/cp-test.txt multinode-671700-m03:/home/docker/cp-test_multinode-671700-m02_multinode-671700-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 ssh -n multinode-671700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 ssh -n multinode-671700-m03 "sudo cat /home/docker/cp-test_multinode-671700-m02_multinode-671700-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 cp testdata\cp-test.txt multinode-671700-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 ssh -n multinode-671700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 cp multinode-671700-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile3753929611\001\cp-test_multinode-671700-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 ssh -n multinode-671700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 cp multinode-671700-m03:/home/docker/cp-test.txt multinode-671700:/home/docker/cp-test_multinode-671700-m03_multinode-671700.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 ssh -n multinode-671700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 ssh -n multinode-671700 "sudo cat /home/docker/cp-test_multinode-671700-m03_multinode-671700.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 cp multinode-671700-m03:/home/docker/cp-test.txt multinode-671700-m02:/home/docker/cp-test_multinode-671700-m03_multinode-671700-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 ssh -n multinode-671700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 ssh -n multinode-671700-m02 "sudo cat /home/docker/cp-test_multinode-671700-m03_multinode-671700-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (18.80s)

TestMultiNode/serial/StopNode (3.69s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671700 node stop m03: (1.6693754s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-671700 status: exit status 7 (1.0195467s)

-- stdout --
	multinode-671700
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-671700-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-671700-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-671700 status --alsologtostderr: exit status 7 (998.2872ms)

-- stdout --
	multinode-671700
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-671700-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-671700-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1227 20:38:27.674398   13816 out.go:360] Setting OutFile to fd 576 ...
	I1227 20:38:27.718046   13816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:38:27.718073   13816 out.go:374] Setting ErrFile to fd 1464...
	I1227 20:38:27.718073   13816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:38:27.730300   13816 out.go:368] Setting JSON to false
	I1227 20:38:27.730343   13816 mustload.go:66] Loading cluster: multinode-671700
	I1227 20:38:27.730389   13816 notify.go:221] Checking for updates...
	I1227 20:38:27.730826   13816 config.go:182] Loaded profile config "multinode-671700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 20:38:27.730826   13816 status.go:174] checking status of multinode-671700 ...
	I1227 20:38:27.740382   13816 cli_runner.go:164] Run: docker container inspect multinode-671700 --format={{.State.Status}}
	I1227 20:38:27.796784   13816 status.go:371] multinode-671700 host status = "Running" (err=<nil>)
	I1227 20:38:27.796784   13816 host.go:66] Checking if "multinode-671700" exists ...
	I1227 20:38:27.801066   13816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-671700
	I1227 20:38:27.853207   13816 host.go:66] Checking if "multinode-671700" exists ...
	I1227 20:38:27.857215   13816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:38:27.860207   13816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-671700
	I1227 20:38:27.914828   13816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58978 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-671700\id_rsa Username:docker}
	I1227 20:38:28.031311   13816 ssh_runner.go:195] Run: systemctl --version
	I1227 20:38:28.045093   13816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:38:28.067171   13816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-671700
	I1227 20:38:28.120530   13816 kubeconfig.go:125] found "multinode-671700" server: "https://127.0.0.1:58982"
	I1227 20:38:28.120530   13816 api_server.go:166] Checking apiserver status ...
	I1227 20:38:28.125152   13816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:38:28.151554   13816 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2226/cgroup
	I1227 20:38:28.165009   13816 api_server.go:192] apiserver freezer: "7:freezer:/docker/23117f8d6a249db2442f27c836ba3b61501c72cb065e14b5b4bff2e7bd5bc866/kubepods/burstable/pod9d823aff209f457cd0e42b2b45741c5f/09f3807ce9774226e82dbf4bb318013822728236078fd589ea30aee4c89a8443"
	I1227 20:38:28.169413   13816 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/23117f8d6a249db2442f27c836ba3b61501c72cb065e14b5b4bff2e7bd5bc866/kubepods/burstable/pod9d823aff209f457cd0e42b2b45741c5f/09f3807ce9774226e82dbf4bb318013822728236078fd589ea30aee4c89a8443/freezer.state
	I1227 20:38:28.184023   13816 api_server.go:214] freezer state: "THAWED"
	I1227 20:38:28.184093   13816 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:58982/healthz ...
	I1227 20:38:28.194436   13816 api_server.go:325] https://127.0.0.1:58982/healthz returned 200:
	ok
	I1227 20:38:28.194436   13816 status.go:463] multinode-671700 apiserver status = Running (err=<nil>)
	I1227 20:38:28.194436   13816 status.go:176] multinode-671700 status: &{Name:multinode-671700 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:38:28.194436   13816 status.go:174] checking status of multinode-671700-m02 ...
	I1227 20:38:28.201977   13816 cli_runner.go:164] Run: docker container inspect multinode-671700-m02 --format={{.State.Status}}
	I1227 20:38:28.255378   13816 status.go:371] multinode-671700-m02 host status = "Running" (err=<nil>)
	I1227 20:38:28.255378   13816 host.go:66] Checking if "multinode-671700-m02" exists ...
	I1227 20:38:28.260086   13816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-671700-m02
	I1227 20:38:28.316085   13816 host.go:66] Checking if "multinode-671700-m02" exists ...
	I1227 20:38:28.321045   13816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:38:28.324196   13816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-671700-m02
	I1227 20:38:28.378335   13816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59030 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-671700-m02\id_rsa Username:docker}
	I1227 20:38:28.502105   13816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:38:28.519463   13816 status.go:176] multinode-671700-m02 status: &{Name:multinode-671700-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:38:28.519463   13816 status.go:174] checking status of multinode-671700-m03 ...
	I1227 20:38:28.526660   13816 cli_runner.go:164] Run: docker container inspect multinode-671700-m03 --format={{.State.Status}}
	I1227 20:38:28.578584   13816 status.go:371] multinode-671700-m03 host status = "Stopped" (err=<nil>)
	I1227 20:38:28.578584   13816 status.go:384] host is not running, skipping remaining checks
	I1227 20:38:28.578584   13816 status.go:176] multinode-671700-m03 status: &{Name:multinode-671700-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.69s)

TestMultiNode/serial/StartAfterStop (13.23s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 node start m03 -v=5 --alsologtostderr
E1227 20:38:34.710110   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671700 node start m03 -v=5 --alsologtostderr: (11.7756273s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 status -v=5 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671700 status -v=5 --alsologtostderr: (1.3174365s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.23s)

TestMultiNode/serial/RestartKeepsNodes (78.49s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-671700
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-671700
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-671700: (24.9642356s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-671700 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-671700 --wait=true -v=5 --alsologtostderr: (53.2244664s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-671700
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.49s)

TestMultiNode/serial/DeleteNode (8.32s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671700 node delete m03: (6.9855496s)
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (8.32s)

TestMultiNode/serial/StopMultiNode (24.07s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 stop
E1227 20:40:31.664183   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671700 stop: (23.5286111s)
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-671700 status: exit status 7 (276.8566ms)

-- stdout --
	multinode-671700
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-671700-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-671700 status --alsologtostderr: exit status 7 (268.5058ms)

-- stdout --
	multinode-671700
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-671700-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1227 20:40:32.510497    8700 out.go:360] Setting OutFile to fd 1724 ...
	I1227 20:40:32.553502    8700 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:40:32.553502    8700 out.go:374] Setting ErrFile to fd 1364...
	I1227 20:40:32.553502    8700 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:40:32.563680    8700 out.go:368] Setting JSON to false
	I1227 20:40:32.563680    8700 mustload.go:66] Loading cluster: multinode-671700
	I1227 20:40:32.563680    8700 notify.go:221] Checking for updates...
	I1227 20:40:32.564715    8700 config.go:182] Loaded profile config "multinode-671700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 20:40:32.564715    8700 status.go:174] checking status of multinode-671700 ...
	I1227 20:40:32.572184    8700 cli_runner.go:164] Run: docker container inspect multinode-671700 --format={{.State.Status}}
	I1227 20:40:32.626296    8700 status.go:371] multinode-671700 host status = "Stopped" (err=<nil>)
	I1227 20:40:32.626337    8700 status.go:384] host is not running, skipping remaining checks
	I1227 20:40:32.626337    8700 status.go:176] multinode-671700 status: &{Name:multinode-671700 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:40:32.626392    8700 status.go:174] checking status of multinode-671700-m02 ...
	I1227 20:40:32.633699    8700 cli_runner.go:164] Run: docker container inspect multinode-671700-m02 --format={{.State.Status}}
	I1227 20:40:32.687161    8700 status.go:371] multinode-671700-m02 host status = "Stopped" (err=<nil>)
	I1227 20:40:32.687161    8700 status.go:384] host is not running, skipping remaining checks
	I1227 20:40:32.687664    8700 status.go:176] multinode-671700-m02 status: &{Name:multinode-671700-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.07s)

TestMultiNode/serial/RestartMultiNode (58.96s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-671700 --wait=true -v=5 --alsologtostderr --driver=docker
multinode_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-671700 --wait=true -v=5 --alsologtostderr --driver=docker: (57.6064935s)
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671700 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (58.96s)

TestMultiNode/serial/ValidateNameConflict (46.75s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-671700
multinode_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-671700-m02 --driver=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-671700-m02 --driver=docker: exit status 14 (217.9782ms)

-- stdout --
	* [multinode-671700-m02] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-671700-m02' is duplicated with machine name 'multinode-671700-m02' in profile 'multinode-671700'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-671700-m03 --driver=docker
multinode_test.go:472: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-671700-m03 --driver=docker: (42.0968394s)
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-671700
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-671700: exit status 80 (626.248ms)

-- stdout --
	* Adding node m03 to cluster multinode-671700 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-671700-m03 already exists in multinode-671700-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_node_6ccce2fc44e3bb58d6c4f91e09ae7c7eaaf65535_26.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-671700-m03
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-671700-m03: (3.6567869s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.75s)

TestScheduledStopWindows (112.54s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-630300 --memory=3072 --driver=docker
E1227 20:42:51.778023   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-630300 --memory=3072 --driver=docker: (46.510112s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-630300 --schedule 5m
minikube stop output:

scheduled_stop_test.go:204: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-630300 -n scheduled-stop-630300
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-630300 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-630300 --schedule 5s
minikube stop output:

scheduled_stop_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-630300
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-630300: exit status 7 (219.7369ms)

-- stdout --
	scheduled-stop-630300
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-630300 -n scheduled-stop-630300
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-630300 -n scheduled-stop-630300: exit status 7 (217.1304ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-630300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-630300
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-630300: (2.4642054s)
--- PASS: TestScheduledStopWindows (112.54s)

TestInsufficientStorage (28.46s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-296300 --memory=3072 --output=json --wait=true --driver=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-296300 --memory=3072 --output=json --wait=true --driver=docker: exit status 26 (24.6276465s)

-- stdout --
	{"specversion":"1.0","id":"823f2264-e8ed-42f8-b834-4fa0f1db5722","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-296300] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8bba892f-a037-4c3e-bf33-e7ac7392384a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"4097c680-a369-449f-acf2-7c902451a14a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"87372791-2227-48bf-8a20-2ff558c12c2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"efd077f7-7afe-400f-83f7-1565ef4c927a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22332"}}
	{"specversion":"1.0","id":"29783211-4986-442f-8ff5-3e8985322300","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d5c5876e-da8e-4913-ab93-831bb8a646de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"aeaa42f9-c710-4db1-85d1-077657adf7ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e06a1965-f748-470f-bba8-c7b1c640e00c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"884422df-5926-47ab-9e54-ab83d87be195","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"d20959f9-e6f2-4a35-b4df-dc0d9b224d4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-296300\" primary control-plane node in \"insufficient-storage-296300\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"518e99a6-460d-407f-b50c-720eb0be2859","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1766570851-22316 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e1f9df62-1f68-4ce6-a31a-64bab6667948","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"14f07829-9c16-4a75-99bd-f8b907461c44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-296300 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-296300 --output=json --layout=cluster: exit status 7 (554.6496ms)

-- stdout --
	{"Name":"insufficient-storage-296300","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-296300","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1227 20:44:42.513845    5012 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-296300" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-296300 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-296300 --output=json --layout=cluster: exit status 7 (550.4912ms)

-- stdout --
	{"Name":"insufficient-storage-296300","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-296300","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1227 20:44:43.065779    5656 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-296300" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	E1227 20:44:43.089037    5656 status.go:258] unable to read event log: stat: GetFileAttributesEx C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\insufficient-storage-296300\events.json: The system cannot find the file specified.

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-296300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-296300
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-296300: (2.7303162s)
--- PASS: TestInsufficientStorage (28.46s)

TestRunningBinaryUpgrade (379.86s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.3745774991.exe start -p running-upgrade-127300 --memory=3072 --vm-driver=docker
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.3745774991.exe start -p running-upgrade-127300 --memory=3072 --vm-driver=docker: (1m1.3008904s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-127300 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-127300 --memory=3072 --alsologtostderr -v=1 --driver=docker: (5m9.4022421s)
helpers_test.go:176: Cleaning up "running-upgrade-127300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-127300
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-127300: (8.3136687s)
--- PASS: TestRunningBinaryUpgrade (379.86s)

TestKubernetesUpgrade (144.17s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-280800 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker
E1227 20:47:51.780890   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-280800 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker: (57.3652711s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-280800 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-280800 --alsologtostderr: (3.8577532s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-280800 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-280800 status --format={{.Host}}: exit status 7 (223.6461ms)

-- stdout --
	Stopped

                                                
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-280800 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-280800 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker: (46.7218208s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-280800 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-280800 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-280800 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker: exit status 106 (210.4912ms)

-- stdout --
	* [kubernetes-upgrade-280800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-280800
	    minikube start -p kubernetes-upgrade-280800 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2808002 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-280800 --kubernetes-version=v1.35.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-280800 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-280800 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker: (31.1393849s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-280800" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-280800
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-280800: (4.5248696s)
--- PASS: TestKubernetesUpgrade (144.17s)

TestMissingContainerUpgrade (192.5s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.217583682.exe start -p missing-upgrade-637800 --memory=3072 --driver=docker
E1227 20:45:31.666734   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.217583682.exe start -p missing-upgrade-637800 --memory=3072 --driver=docker: (1m47.907373s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-637800
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-637800: (2.3410431s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-637800
version_upgrade_test.go:329: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-637800 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-637800 --memory=3072 --alsologtostderr -v=1 --driver=docker: (1m17.5281418s)
helpers_test.go:176: Cleaning up "missing-upgrade-637800" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-637800
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-637800: (3.6408687s)
--- PASS: TestMissingContainerUpgrade (192.50s)

TestPause/serial/Start (129.31s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-637800 --memory=3072 --install-addons=false --wait=all --driver=docker
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-637800 --memory=3072 --install-addons=false --wait=all --driver=docker: (2m9.3072695s)
--- PASS: TestPause/serial/Start (129.31s)

TestPause/serial/SecondStartNoReconfiguration (57.53s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-637800 --alsologtostderr -v=1 --driver=docker
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-637800 --alsologtostderr -v=1 --driver=docker: (57.5085972s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (57.53s)

TestPause/serial/Pause (1.16s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-637800 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-637800 --alsologtostderr -v=5: (1.1576041s)
--- PASS: TestPause/serial/Pause (1.16s)

TestPause/serial/VerifyStatus (0.64s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-637800 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-637800 --output=json --layout=cluster: exit status 2 (642.0091ms)

-- stdout --
	{"Name":"pause-637800","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-637800","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.64s)

TestPause/serial/Unpause (0.99s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-637800 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.99s)

TestPause/serial/PauseAgain (1.41s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-637800 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-637800 --alsologtostderr -v=5: (1.4060913s)
--- PASS: TestPause/serial/PauseAgain (1.41s)

TestPause/serial/DeletePaused (4.4s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-637800 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-637800 --alsologtostderr -v=5: (4.3969061s)
--- PASS: TestPause/serial/DeletePaused (4.40s)

TestStoppedBinaryUpgrade/Setup (0.9s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.90s)

TestStoppedBinaryUpgrade/Upgrade (337.44s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.3638635423.exe start -p stopped-upgrade-172600 --memory=3072 --vm-driver=docker
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.3638635423.exe start -p stopped-upgrade-172600 --memory=3072 --vm-driver=docker: (59.7054161s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.3638635423.exe -p stopped-upgrade-172600 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.3638635423.exe -p stopped-upgrade-172600 stop: (2.0406046s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-172600 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-172600 --memory=3072 --alsologtostderr -v=1 --driver=docker: (4m35.6905305s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (337.44s)

TestPause/serial/VerifyDeletedResources (1.06s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-637800
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-637800: exit status 1 (57.0098ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-637800: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (1.06s)

TestPreload/Start-NoPreload-PullImage (100.84s)

=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-995900 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker
E1227 20:50:31.670676   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-995900 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker: (1m26.727616s)
preload_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-995900 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-995900 image pull ghcr.io/medyagh/image-mirrors/busybox:latest: (2.0459351s)
preload_test.go:62: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-995900
E1227 20:50:54.828492   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:62: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-995900: (12.0609279s)
--- PASS: TestPreload/Start-NoPreload-PullImage (100.84s)

TestPreload/Restart-With-Preload-Check-User-Image (48.43s)

=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-995900 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-995900 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker: (47.9769821s)
preload_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-995900 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (48.43s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.24s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-924000 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-924000 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker: exit status 14 (240.7294ms)

-- stdout --
	* [NoKubernetes-924000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.24s)

TestNoKubernetes/serial/StartWithK8s (44.85s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-924000 --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:120: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-924000 --memory=3072 --alsologtostderr -v=5 --driver=docker: (44.2128358s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-924000 status -o json
E1227 20:52:51.783982   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-052200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.85s)

TestNoKubernetes/serial/StartWithStopK8s (20.91s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-924000 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-924000 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker: (17.5389227s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-924000 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-924000 status -o json: exit status 2 (568.0254ms)

-- stdout --
	{"Name":"NoKubernetes-924000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-924000
no_kubernetes_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-924000: (2.7983878s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (20.91s)

TestNoKubernetes/serial/Start (13.94s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-924000 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:161: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-924000 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker: (13.9404331s)
--- PASS: TestNoKubernetes/serial/Start (13.94s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.59s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-924000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-924000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (591.0329ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.59s)

TestNoKubernetes/serial/ProfileList (8.28s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-windows-amd64.exe profile list: (6.1729702s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (2.1044124s)
--- PASS: TestNoKubernetes/serial/ProfileList (8.28s)

TestNoKubernetes/serial/Stop (1.9s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-924000
no_kubernetes_test.go:183: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-924000: (1.8960908s)
--- PASS: TestNoKubernetes/serial/Stop (1.90s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.34s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-172600
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-172600: (1.3416561s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.34s)

TestNoKubernetes/serial/StartNoArgs (12.92s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-924000 --driver=docker
no_kubernetes_test.go:216: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-924000 --driver=docker: (12.9165377s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (12.92s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.52s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-924000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-924000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (515.4477ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.52s)

TestStartStop/group/old-k8s-version/serial/FirstStart (96.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-619800 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-619800 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0: (1m36.4810553s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (96.48s)

TestStartStop/group/no-preload/serial/FirstStart (94.99s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-246300 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0
E1227 20:55:31.674262   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-246300 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0: (1m34.9850191s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (94.99s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-619800 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [c4f44184-ed92-46c1-8075-5812b21cca8c] Pending
helpers_test.go:353: "busybox" [c4f44184-ed92-46c1-8075-5812b21cca8c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [c4f44184-ed92-46c1-8075-5812b21cca8c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.0056321s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-619800 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.65s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-619800 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-619800 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.3882759s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-619800 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.58s)

TestStartStop/group/old-k8s-version/serial/Stop (12.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-619800 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-619800 --alsologtostderr -v=3: (12.1788512s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.18s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-619800 -n old-k8s-version-619800
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-619800 -n old-k8s-version-619800: exit status 7 (213.1384ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-619800 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.50s)

TestStartStop/group/old-k8s-version/serial/SecondStart (30.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-619800 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-619800 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0: (30.0983871s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-619800 -n old-k8s-version-619800
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (30.73s)

TestStartStop/group/no-preload/serial/DeployApp (9.61s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-246300 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [6394e7eb-80db-40f7-a0eb-bd59909596b6] Pending
helpers_test.go:353: "busybox" [6394e7eb-80db-40f7-a0eb-bd59909596b6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [6394e7eb-80db-40f7-a0eb-bd59909596b6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.0077252s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-246300 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.61s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.69s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-246300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-246300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.482908s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-246300 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.69s)

TestStartStop/group/no-preload/serial/Stop (12.33s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-246300 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-246300 --alsologtostderr -v=3: (12.3271379s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.33s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (25.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-fjkpf" [344e3c88-f420-4732-b5ed-9c1c5e8552f3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-fjkpf" [344e3c88-f420-4732-b5ed-9c1c5e8552f3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 25.006515s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (25.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.5s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-246300 -n no-preload-246300
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-246300 -n no-preload-246300: exit status 7 (197.6997ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-246300 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.50s)

TestStartStop/group/no-preload/serial/SecondStart (53.52s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-246300 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-246300 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0: (52.7117887s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-246300 -n no-preload-246300
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (53.52s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-fjkpf" [344e3c88-f420-4732-b5ed-9c1c5e8552f3] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0064788s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-619800 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.30s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-619800 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.46s)

TestStartStop/group/old-k8s-version/serial/Pause (5.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-619800 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-619800 --alsologtostderr -v=1: (1.0867268s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-619800 -n old-k8s-version-619800
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-619800 -n old-k8s-version-619800: exit status 2 (673.9676ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-619800 -n old-k8s-version-619800
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-619800 -n old-k8s-version-619800: exit status 2 (751.7909ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-619800 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-619800 --alsologtostderr -v=1: (1.1449742s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-619800 -n old-k8s-version-619800
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-619800 -n old-k8s-version-619800: (1.0912321s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-619800 -n old-k8s-version-619800
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (5.39s)

TestStartStop/group/embed-certs/serial/FirstStart (98.87s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-569900 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-569900 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.35.0: (1m38.866929s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (98.87s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-zv8n9" [93b73928-1695-42b1-b6ca-1c5562685116] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0636443s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.27s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-zv8n9" [93b73928-1695-42b1-b6ca-1c5562685116] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0061115s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-246300 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.27s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.48s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-246300 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.48s)

TestStartStop/group/no-preload/serial/Pause (5.33s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-246300 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-246300 --alsologtostderr -v=1: (1.2832767s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-246300 -n no-preload-246300
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-246300 -n no-preload-246300: exit status 2 (661.1185ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-246300 -n no-preload-246300
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-246300 -n no-preload-246300: exit status 2 (674.0957ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-246300 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-246300 --alsologtostderr -v=1: (1.0100586s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-246300 -n no-preload-246300
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-246300 -n no-preload-246300: (1.0388308s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-246300 -n no-preload-246300
--- PASS: TestStartStop/group/no-preload/serial/Pause (5.33s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-739400 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-739400 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.35.0: (1m26.313472s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.31s)

TestStartStop/group/newest-cni/serial/FirstStart (57.68s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-998200 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-998200 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0: (57.6762247s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (57.68s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.93s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-998200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-998200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.9311971s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.93s)

TestStartStop/group/newest-cni/serial/Stop (12.23s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-998200 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-998200 --alsologtostderr -v=3: (12.2268479s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.23s)

TestStartStop/group/embed-certs/serial/DeployApp (8.59s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-569900 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [f913fa47-f4b4-4dc4-af0d-48f01e3dea2f] Pending
helpers_test.go:353: "busybox" [f913fa47-f4b4-4dc4-af0d-48f01e3dea2f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [f913fa47-f4b4-4dc4-af0d-48f01e3dea2f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.0108866s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-569900 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.59s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.51s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-998200 -n newest-cni-998200
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-998200 -n newest-cni-998200: exit status 7 (212.7946ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-998200 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.51s)

TestStartStop/group/newest-cni/serial/SecondStart (23.78s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-998200 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-998200 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0: (23.0598162s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-998200 -n newest-cni-998200
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (23.78s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.59s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-569900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-569900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.3891707s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-569900 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.59s)

TestStartStop/group/embed-certs/serial/Stop (12.36s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-569900 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-569900 --alsologtostderr -v=3: (12.356217s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.36s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-739400 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [2235a5d5-6f8e-4fb8-aded-8fdbd05a9c46] Pending
helpers_test.go:353: "busybox" [2235a5d5-6f8e-4fb8-aded-8fdbd05a9c46] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [2235a5d5-6f8e-4fb8-aded-8fdbd05a9c46] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.0070226s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-739400 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.61s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.58s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-569900 -n embed-certs-569900
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-569900 -n embed-certs-569900: exit status 7 (245.1596ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-569900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.58s)

TestStartStop/group/embed-certs/serial/SecondStart (30.28s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-569900 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-569900 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.35.0: (29.4553873s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-569900 -n embed-certs-569900
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (30.28s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.5s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-998200 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.50s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-739400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-739400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.378559s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-739400 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.58s)

TestStartStop/group/newest-cni/serial/Pause (5.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-998200 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-998200 --alsologtostderr -v=1: (1.3534131s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-998200 -n newest-cni-998200
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-998200 -n newest-cni-998200: exit status 2 (660.8797ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-998200 -n newest-cni-998200
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-998200 -n newest-cni-998200: exit status 2 (650.5ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-998200 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-998200 --alsologtostderr -v=1: (1.0486387s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-998200 -n newest-cni-998200
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-998200 -n newest-cni-998200
--- PASS: TestStartStop/group/newest-cni/serial/Pause (5.37s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-739400 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-739400 --alsologtostderr -v=3: (12.3910111s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.39s)

TestPreload/PreloadSrc/gcs (6.59s)

=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-dl-gcs-623800 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker
preload_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-dl-gcs-623800 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker: (6.0242557s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-623800" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-dl-gcs-623800
--- PASS: TestPreload/PreloadSrc/gcs (6.59s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.55s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-739400 -n default-k8s-diff-port-739400
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-739400 -n default-k8s-diff-port-739400: exit status 7 (225.5372ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-739400 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.55s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (65.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-739400 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-739400 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.35.0: (1m5.0148638s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-739400 -n default-k8s-diff-port-739400
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (65.69s)

TestPreload/PreloadSrc/github (8.85s)

=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-dl-github-036600 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker
preload_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-dl-github-036600 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker: (7.4745975s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-036600" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-dl-github-036600
E1227 21:00:31.678345   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-192900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-dl-github-036600: (1.379053s)
--- PASS: TestPreload/PreloadSrc/github (8.85s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-lmnhc" [9f5baf08-e831-459a-86bf-f5115b918c94] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-lmnhc" [9f5baf08-e831-459a-86bf-f5115b918c94] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.0060069s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.01s)

TestPreload/PreloadSrc/gcs-cached (2.08s)

=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-dl-gcs-cached-803800 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker
preload_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-dl-gcs-cached-803800 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker: (1.4203245s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-803800" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-dl-gcs-cached-803800
--- PASS: TestPreload/PreloadSrc/gcs-cached (2.08s)

TestNetworkPlugins/group/auto/Start (93.78s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-630300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-630300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker: (1m33.7757072s)
--- PASS: TestNetworkPlugins/group/auto/Start (93.78s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-lmnhc" [9f5baf08-e831-459a-86bf-f5115b918c94] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.698602s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-569900 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.54s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-569900 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.54s)

TestStartStop/group/embed-certs/serial/Pause (5.77s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-569900 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-569900 --alsologtostderr -v=1: (1.4714511s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-569900 -n embed-certs-569900
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-569900 -n embed-certs-569900: exit status 2 (727.6917ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-569900 -n embed-certs-569900
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-569900 -n embed-certs-569900: exit status 2 (726.8117ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-569900 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-569900 --alsologtostderr -v=1: (1.0369282s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-569900 -n embed-certs-569900
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-569900 -n embed-certs-569900
--- PASS: TestStartStop/group/embed-certs/serial/Pause (5.77s)

TestNetworkPlugins/group/flannel/Start (73.02s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-630300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker
E1227 21:01:19.478896   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-619800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:01:19.484358   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-619800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:01:19.494884   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-619800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:01:19.515493   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-619800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:01:19.556730   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-619800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:01:19.637914   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-619800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:01:19.798927   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-619800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:01:20.119348   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-619800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:01:20.760265   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-619800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:01:22.041266   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-619800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:01:24.602405   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-619800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-630300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker: (1m13.0200313s)
--- PASS: TestNetworkPlugins/group/flannel/Start (73.02s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-vxwk8" [d1f90bc6-90f8-4ab1-ba51-c080c177d9a2] Running
E1227 21:01:29.722882   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-619800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005329s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-vxwk8" [d1f90bc6-90f8-4ab1-ba51-c080c177d9a2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0047062s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-739400 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E1227 21:01:39.964005   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-619800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.38s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-739400 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.47s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (5.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-739400 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-739400 --alsologtostderr -v=1: (1.260905s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-739400 -n default-k8s-diff-port-739400
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-739400 -n default-k8s-diff-port-739400: exit status 2 (617.7239ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-739400 -n default-k8s-diff-port-739400
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-739400 -n default-k8s-diff-port-739400: exit status 2 (595.8132ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-739400 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-739400 --alsologtostderr -v=1: (1.007088s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-739400 -n default-k8s-diff-port-739400
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-739400 -n default-k8s-diff-port-739400: (1.0661605s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-739400 -n default-k8s-diff-port-739400
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.19s)

TestNetworkPlugins/group/enable-default-cni/Start (84.57s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-630300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker
E1227 21:01:51.339390   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-246300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:01:51.345384   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-246300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:01:51.356393   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-246300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:01:51.377317   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-246300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:01:51.417911   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-246300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:01:51.498058   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-246300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:01:51.658750   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-246300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:01:51.979022   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-246300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:01:52.619745   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-246300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:01:53.900848   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-246300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:01:56.461554   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-246300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:02:00.445005   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-619800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:02:01.582568   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-246300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-630300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker: (1m24.5735141s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (84.57s)

TestNetworkPlugins/group/auto/KubeletFlags (0.57s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-630300 "pgrep -a kubelet"
I1227 21:02:09.458382   13656 config.go:182] Loaded profile config "auto-630300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.57s)

TestNetworkPlugins/group/auto/NetCatPod (15.53s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-630300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-4nrlj" [9ba8f18a-265b-4b81-ac2f-f1aa504c806c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1227 21:02:11.823568   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-246300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-4nrlj" [9ba8f18a-265b-4b81-ac2f-f1aa504c806c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 15.0069109s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (15.53s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-5r6k5" [c83cd7c7-bcf9-4b25-a34e-660530d35e79] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0103935s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.55s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-630300 "pgrep -a kubelet"
I1227 21:02:22.245842   13656 config.go:182] Loaded profile config "flannel-630300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.55s)

TestNetworkPlugins/group/flannel/NetCatPod (14.39s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-630300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-5bxfq" [a9de0ca7-1cc9-4398-8285-b184c323860d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-5bxfq" [a9de0ca7-1cc9-4398-8285-b184c323860d] Running
E1227 21:02:32.304676   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-246300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.0058916s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.39s)

TestNetworkPlugins/group/auto/DNS (0.39s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-630300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.39s)

TestNetworkPlugins/group/auto/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-630300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.28s)

TestNetworkPlugins/group/auto/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-630300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.27s)

TestNetworkPlugins/group/flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-630300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

TestNetworkPlugins/group/flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-630300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

TestNetworkPlugins/group/flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-630300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.23s)

TestNetworkPlugins/group/bridge/Start (87.98s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-630300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-630300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker: (1m27.9764484s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.98s)

TestNetworkPlugins/group/kubenet/Start (99.11s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-630300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-630300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker: (1m39.1053936s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (99.11s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.58s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-630300 "pgrep -a kubelet"
I1227 21:03:15.877228   13656 config.go:182] Loaded profile config "enable-default-cni-630300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.58s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (24.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-630300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context enable-default-cni-630300 replace --force -f testdata\netcat-deployment.yaml: (1.4879266s)
I1227 21:03:17.503910   13656 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1227 21:03:17.860319   13656 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-5ngpv" [386e4489-879a-4410-bcbd-d6ba8bcc134e] Pending
helpers_test.go:353: "netcat-5dd4ccdc4b-5ngpv" [386e4489-879a-4410-bcbd-d6ba8bcc134e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-5ngpv" [386e4489-879a-4410-bcbd-d6ba8bcc134e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 22.0088047s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (24.24s)

TestNetworkPlugins/group/custom-flannel/Start (71.65s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-630300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-630300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker: (1m11.6548872s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.65s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-630300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-630300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-630300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

TestNetworkPlugins/group/calico/Start (103.33s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-630300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-630300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker: (1m43.333129s)
--- PASS: TestNetworkPlugins/group/calico/Start (103.33s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.60s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-630300 "pgrep -a kubelet"
I1227 21:04:27.175328   13656 config.go:182] Loaded profile config "bridge-630300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.60s)

TestNetworkPlugins/group/bridge/NetCatPod (16.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-630300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context bridge-630300 replace --force -f testdata\netcat-deployment.yaml: (1.6520293s)
I1227 21:04:29.125961   13656 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-7nxtf" [5c49e47b-20f9-4612-be94-967ce70c6abb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-7nxtf" [5c49e47b-20f9-4612-be94-967ce70c6abb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.0078253s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (16.25s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.68s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-630300 "pgrep -a kubelet"
I1227 21:04:34.394318   13656 config.go:182] Loaded profile config "custom-flannel-630300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.68s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (15.47s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-630300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-k8z5g" [9608a657-c125-44a6-be92-168f8a463863] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1227 21:04:35.187398   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-246300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-k8z5g" [9608a657-c125-44a6-be92-168f8a463863] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 15.0056228s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (15.47s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.62s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-630300 "pgrep -a kubelet"
I1227 21:04:39.184557   13656 config.go:182] Loaded profile config "kubenet-630300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.62s)

TestNetworkPlugins/group/kubenet/NetCatPod (15.63s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-630300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-btjkx" [6f40422f-38ad-4576-8ebf-8dc3a5e09f27] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-btjkx" [6f40422f-38ad-4576-8ebf-8dc3a5e09f27] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 15.0065985s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (15.63s)

TestNetworkPlugins/group/bridge/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-630300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.32s)

TestNetworkPlugins/group/bridge/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-630300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.25s)

TestNetworkPlugins/group/bridge/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-630300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

TestNetworkPlugins/group/custom-flannel/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-630300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.31s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-630300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-630300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

TestNetworkPlugins/group/kubenet/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-630300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.24s)

TestNetworkPlugins/group/kubenet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-630300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.21s)

TestNetworkPlugins/group/kubenet/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-630300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.21s)

TestNetworkPlugins/group/false/Start (85.59s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-630300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-630300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker: (1m25.5910857s)
--- PASS: TestNetworkPlugins/group/false/Start (85.59s)

TestNetworkPlugins/group/kindnet/Start (72.63s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-630300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-630300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker: (1m12.6321025s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (72.63s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-sb8bs" [8942932c-e6e9-4531-92ba-33f725bed8ee] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.009669s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.55s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-630300 "pgrep -a kubelet"
I1227 21:06:04.922754   13656 config.go:182] Loaded profile config "calico-630300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.55s)

TestNetworkPlugins/group/calico/NetCatPod (14.53s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-630300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-98tnd" [bc062ae6-29ee-4e69-bb12-bea739660da2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-98tnd" [bc062ae6-29ee-4e69-bb12-bea739660da2] Running
E1227 21:06:19.213250   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-739400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.0079125s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.53s)

TestNetworkPlugins/group/calico/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-630300 exec deployment/netcat -- nslookup kubernetes.default
E1227 21:06:19.482657   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-619800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

TestNetworkPlugins/group/calico/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-630300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.27s)

TestNetworkPlugins/group/calico/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-630300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

TestNetworkPlugins/group/false/KubeletFlags (0.54s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-630300 "pgrep -a kubelet"
I1227 21:06:46.329200   13656 config.go:182] Loaded profile config "false-630300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.54s)

TestNetworkPlugins/group/false/NetCatPod (14.51s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-630300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-fshz7" [d865562d-324e-4887-87d6-aaa2bbd26f9c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1227 21:06:47.169987   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-619800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-fshz7" [d865562d-324e-4887-87d6-aaa2bbd26f9c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 14.0061132s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (14.51s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-mp28g" [2761378a-1967-4075-8016-5342e7d35b16] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.015127s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.56s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-630300 "pgrep -a kubelet"
I1227 21:06:59.672825   13656 config.go:182] Loaded profile config "kindnet-630300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.56s)

TestNetworkPlugins/group/kindnet/NetCatPod (18.47s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-630300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-t9rpl" [1d09dc49-bf1b-4303-ad5f-02a1208648a0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-t9rpl" [1d09dc49-bf1b-4303-ad5f-02a1208648a0] Running
E1227 21:07:12.527523   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-630300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:07:15.088514   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-630300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:07:15.689777   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-630300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:07:15.694963   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-630300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:07:15.705500   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-630300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:07:15.725917   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-630300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:07:15.766401   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-630300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:07:15.847638   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-630300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:07:16.008056   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-630300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:07:16.328502   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-630300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:07:16.969327   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-630300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 18.0104149s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (18.47s)

TestNetworkPlugins/group/false/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-630300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.24s)

TestNetworkPlugins/group/false/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-630300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.21s)

TestNetworkPlugins/group/false/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-630300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.21s)

TestNetworkPlugins/group/kindnet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-630300 exec deployment/netcat -- nslookup kubernetes.default
E1227 21:07:18.250143   13656 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-630300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-630300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

TestNetworkPlugins/group/kindnet/HairPin (0.20s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-630300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)


Test skip (27/349)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.35.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Registry (22.64s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 8.4961ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-6kpf7" [48aac8e3-5d4f-40e4-a927-3eda89e378fb] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.0998734s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-ks6hf" [ea6f5f7b-80b1-4040-bc09-33c3c881c130] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.00538s
addons_test.go:394: (dbg) Run:  kubectl --context addons-192900 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-192900 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-192900 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.6408432s)
addons_test.go:409: Unable to complete rest of the test due to connectivity assumptions
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-192900 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-192900 addons disable registry --alsologtostderr -v=1: (1.7362898s)
--- SKIP: TestAddons/parallel/Registry (22.64s)

TestAddons/parallel/Ingress (26.26s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-192900 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-192900 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-192900 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [844aad51-925d-4261-8a39-96747ca50d85] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [844aad51-925d-4261-8a39-96747ca50d85] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.0154744s
I1227 20:02:26.818440   13656 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-192900 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: skipping ingress DNS test for any combination that needs port forwarding
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-192900 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-192900 addons disable ingress-dns --alsologtostderr -v=1: (2.1703106s)
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-192900 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-192900 addons disable ingress --alsologtostderr -v=1: (8.4391775s)
--- SKIP: TestAddons/parallel/Ingress (26.26s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 0 -p functional-052200 --alsologtostderr -v=1]
functional_test.go:931: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 0 -p functional-052200 --alsologtostderr -v=1] ...
helpers_test.go:520: unable to terminate pid 1532: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:65: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/ServiceCmdConnect (8.29s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-052200 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-052200 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-kqwd4" [721693cf-e077-4e94-b42d-ba50f56f472f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-kqwd4" [721693cf-e077-4e94-b42d-ba50f56f472f] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.0081258s
functional_test.go:1656: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (8.29s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.5s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-517500" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-517500
--- SKIP: TestStartStop/group/disable-driver-mounts (0.50s)

TestNetworkPlugins/group/cilium (9.12s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-630300 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-630300

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-630300

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-630300

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-630300

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-630300

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-630300

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-630300

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-630300

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-630300

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-630300

>>> host: /etc/nsswitch.conf:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

>>> host: /etc/hosts:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

>>> host: /etc/resolv.conf:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-630300

>>> host: crictl pods:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

>>> host: crictl containers:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

>>> k8s: describe netcat deployment:
error: context "cilium-630300" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-630300" does not exist

>>> k8s: netcat logs:
error: context "cilium-630300" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-630300" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-630300" does not exist

>>> k8s: coredns logs:
error: context "cilium-630300" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-630300" does not exist

>>> k8s: api server logs:
error: context "cilium-630300" does not exist

>>> host: /etc/cni:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

>>> host: ip a s:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

>>> host: ip r s:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

>>> host: iptables-save:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

>>> host: iptables table nat:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-630300

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-630300

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-630300" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-630300" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-630300

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-630300

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-630300" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-630300" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-630300" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-630300" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-630300" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

>>> host: kubelet daemon config:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

>>> k8s: kubelet logs:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: C:/Users/jenkins.minikube4/minikube-integration/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 20:49:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://127.0.0.1:60006
  name: running-upgrade-127300
- cluster:
    certificate-authority: C:/Users/jenkins.minikube4/minikube-integration/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 20:49:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://127.0.0.1:60091
  name: stopped-upgrade-172600
contexts:
- context:
    cluster: running-upgrade-127300
    user: running-upgrade-127300
  name: running-upgrade-127300
- context:
    cluster: stopped-upgrade-172600
    user: stopped-upgrade-172600
  name: stopped-upgrade-172600
current-context: ""
kind: Config
users:
- name: running-upgrade-127300
  user:
    client-certificate: C:/Users/jenkins.minikube4/minikube-integration/.minikube/profiles/running-upgrade-127300/client.crt
    client-key: C:/Users/jenkins.minikube4/minikube-integration/.minikube/profiles/running-upgrade-127300/client.key
- name: stopped-upgrade-172600
  user:
    client-certificate: C:/Users/jenkins.minikube4/minikube-integration/.minikube/profiles/stopped-upgrade-172600/client.crt
    client-key: C:/Users/jenkins.minikube4/minikube-integration/.minikube/profiles/stopped-upgrade-172600/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-630300

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-630300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630300"

                                                
                                                
----------------------- debugLogs end: cilium-630300 [took: 8.6426332s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-630300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-630300
--- SKIP: TestNetworkPlugins/group/cilium (9.12s)
