Test Report: Docker_Windows 22352

9a7985111956b2877773a073c576921d0f069a2d:2025-12-28:43023

Failed tests (3/349)

Order  Failed test            Duration (s)
52     TestForceSystemdFlag         568.11
53     TestForceSystemdEnv          523.70
58     TestErrorSpam/setup           42.73
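
To reproduce these locally, the minikube integration tests are ordinary Go tests; a sketch, assuming the usual repository layout (suite under test/integration) and that out/minikube-windows-amd64.exe has already been built. Any suite-specific flags (driver selection, binary path) are environment-dependent and omitted here:

	go test ./test/integration -run "TestForceSystemdFlag|TestForceSystemdEnv|TestErrorSpam" -timeout 60m -v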
TestForceSystemdFlag (568.11s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-550200 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:91: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-flag-550200 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker: exit status 109 (9m21.5507796s)

-- stdout --
	* [force-systemd-flag-550200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-flag-550200" primary control-plane node in "force-systemd-flag-550200" cluster
	* Pulling base image v0.0.48-1766884053-22351 ...
	
	

-- /stdout --
** stderr ** 
	I1228 07:19:09.945542   10956 out.go:360] Setting OutFile to fd 1596 ...
	I1228 07:19:10.026353   10956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:19:10.026419   10956 out.go:374] Setting ErrFile to fd 1664...
	I1228 07:19:10.026444   10956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:19:10.044977   10956 out.go:368] Setting JSON to false
	I1228 07:19:10.048224   10956 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6289,"bootTime":1766900060,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"22H2","kernelVersion":"10.0.19045.6691 Build 19045.6691","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1228 07:19:10.048224   10956 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1228 07:19:10.053565   10956 out.go:179] * [force-systemd-flag-550200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	I1228 07:19:10.060278   10956 notify.go:221] Checking for updates...
	I1228 07:19:10.065228   10956 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1228 07:19:10.070252   10956 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:19:10.075776   10956 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1228 07:19:10.083237   10956 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:19:10.086684   10956 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:19:10.091324   10956 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:19:10.249711   10956 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1228 07:19:10.253911   10956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:19:10.682141   10956 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:80 SystemTime:2025-12-28 07:19:10.661909501 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1228 07:19:10.686141   10956 out.go:179] * Using the docker driver based on user configuration
	I1228 07:19:10.689153   10956 start.go:309] selected driver: docker
	I1228 07:19:10.689153   10956 start.go:928] validating driver "docker" against <nil>
	I1228 07:19:10.689153   10956 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:19:10.696144   10956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:19:11.041541   10956 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:80 SystemTime:2025-12-28 07:19:11.020193516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1228 07:19:11.041541   10956 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 07:19:11.042541   10956 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 07:19:11.400618   10956 out.go:179] * Using Docker Desktop driver with root privileges
	I1228 07:19:11.414529   10956 cni.go:84] Creating CNI manager for ""
	I1228 07:19:11.414529   10956 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1228 07:19:11.414529   10956 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1228 07:19:11.414529   10956 start.go:353] cluster config:
	{Name:force-systemd-flag-550200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-550200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:19:11.452563   10956 out.go:179] * Starting "force-systemd-flag-550200" primary control-plane node in "force-systemd-flag-550200" cluster
	I1228 07:19:11.466043   10956 cache.go:134] Beginning downloading kic base image for docker with docker
	I1228 07:19:11.486723   10956 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:19:11.499037   10956 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:19:11.499073   10956 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1228 07:19:11.499299   10956 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1228 07:19:11.499367   10956 cache.go:65] Caching tarball of preloaded images
	I1228 07:19:11.499657   10956 preload.go:251] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1228 07:19:11.499816   10956 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1228 07:19:11.500398   10956 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\config.json ...
	I1228 07:19:11.500584   10956 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\config.json: {Name:mkc4f0fcb183c76eff9b9a6f79aae1fd565a77e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:19:11.578656   10956 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:19:11.578656   10956 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:19:11.578656   10956 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:19:11.578656   10956 start.go:360] acquireMachinesLock for force-systemd-flag-550200: {Name:mk1102644977e3c3e95d5da7d5c083d9caab1082 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:19:11.578656   10956 start.go:364] duration metric: took 0s to acquireMachinesLock for "force-systemd-flag-550200"
	I1228 07:19:11.578656   10956 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-550200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-550200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1228 07:19:11.579658   10956 start.go:125] createHost starting for "" (driver="docker")
	I1228 07:19:11.702088   10956 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1228 07:19:11.702738   10956 start.go:159] libmachine.API.Create for "force-systemd-flag-550200" (driver="docker")
	I1228 07:19:11.702828   10956 client.go:173] LocalClient.Create starting
	I1228 07:19:11.703359   10956 main.go:144] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1228 07:19:11.703644   10956 main.go:144] libmachine: Decoding PEM data...
	I1228 07:19:11.703693   10956 main.go:144] libmachine: Parsing certificate...
	I1228 07:19:11.703929   10956 main.go:144] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1228 07:19:11.703986   10956 main.go:144] libmachine: Decoding PEM data...
	I1228 07:19:11.703986   10956 main.go:144] libmachine: Parsing certificate...
	I1228 07:19:11.712765   10956 cli_runner.go:164] Run: docker network inspect force-systemd-flag-550200 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1228 07:19:11.773354   10956 cli_runner.go:211] docker network inspect force-systemd-flag-550200 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1228 07:19:11.777358   10956 network_create.go:284] running [docker network inspect force-systemd-flag-550200] to gather additional debugging logs...
	I1228 07:19:11.777358   10956 cli_runner.go:164] Run: docker network inspect force-systemd-flag-550200
	W1228 07:19:11.825357   10956 cli_runner.go:211] docker network inspect force-systemd-flag-550200 returned with exit code 1
	I1228 07:19:11.825357   10956 network_create.go:287] error running [docker network inspect force-systemd-flag-550200]: docker network inspect force-systemd-flag-550200: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-550200 not found
	I1228 07:19:11.825357   10956 network_create.go:289] output of [docker network inspect force-systemd-flag-550200]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-550200 not found
	
	** /stderr **
	I1228 07:19:11.830360   10956 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:19:11.905372   10956 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1228 07:19:11.937356   10956 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1228 07:19:11.954358   10956 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001772690}
	I1228 07:19:11.954358   10956 network_create.go:124] attempt to create docker network force-systemd-flag-550200 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1228 07:19:11.957357   10956 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-550200 force-systemd-flag-550200
	W1228 07:19:12.027561   10956 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-550200 force-systemd-flag-550200 returned with exit code 1
	W1228 07:19:12.027561   10956 network_create.go:149] failed to create docker network force-systemd-flag-550200 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-550200 force-systemd-flag-550200: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1228 07:19:12.027561   10956 network_create.go:116] failed to create docker network force-systemd-flag-550200 192.168.67.0/24, will retry: subnet is taken
	I1228 07:19:12.059559   10956 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1228 07:19:12.091557   10956 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1228 07:19:12.123547   10956 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1228 07:19:12.137565   10956 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001862060}
	I1228 07:19:12.137565   10956 network_create.go:124] attempt to create docker network force-systemd-flag-550200 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1228 07:19:12.141555   10956 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-550200 force-systemd-flag-550200
	I1228 07:19:12.308553   10956 network_create.go:108] docker network force-systemd-flag-550200 192.168.94.0/24 created
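
For reference, the "Pool overlaps with other one on this address space" rejection above, and the reserved-subnet skips that follow it, can be cross-checked against the daemon's own view; a minimal sketch using standard Docker CLI commands, shown in a POSIX shell (output varies by host):

	# print each network alongside the subnet(s) its IPAM pool occupies
	docker network ls -q | xargs docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
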
	I1228 07:19:12.308553   10956 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-flag-550200" container
	I1228 07:19:12.315552   10956 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1228 07:19:12.394439   10956 cli_runner.go:164] Run: docker volume create force-systemd-flag-550200 --label name.minikube.sigs.k8s.io=force-systemd-flag-550200 --label created_by.minikube.sigs.k8s.io=true
	I1228 07:19:12.455418   10956 oci.go:103] Successfully created a docker volume force-systemd-flag-550200
	I1228 07:19:12.459420   10956 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-550200-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-550200 --entrypoint /usr/bin/test -v force-systemd-flag-550200:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib
	I1228 07:19:14.418157   10956 cli_runner.go:217] Completed: docker run --rm --name force-systemd-flag-550200-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-550200 --entrypoint /usr/bin/test -v force-systemd-flag-550200:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib: (1.9587086s)
	I1228 07:19:14.418157   10956 oci.go:107] Successfully prepared a docker volume force-systemd-flag-550200
	I1228 07:19:14.418157   10956 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1228 07:19:14.418157   10956 kic.go:194] Starting extracting preloaded images to volume ...
	I1228 07:19:14.429082   10956 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-550200:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1228 07:20:02.662084   10956 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-550200:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir: (48.2323051s)
	I1228 07:20:02.662084   10956 kic.go:203] duration metric: took 48.2432293s to extract preloaded images to volume ...
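
For reference, the preload volume populated above can be confirmed from the host; a sketch using standard Docker CLI commands (busybox as a throwaway inspection image is an arbitrary choice, not something this run uses):

	docker volume inspect force-systemd-flag-550200 -f '{{.Name}} -> {{.Mountpoint}}'
	docker run --rm -v force-systemd-flag-550200:/var busybox ls /var/lib/docker
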
	I1228 07:20:02.668547   10956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:20:03.082145   10956 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:89 SystemTime:2025-12-28 07:20:03.060406893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1228 07:20:03.086147   10956 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1228 07:20:03.526431   10956 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-550200 --name force-systemd-flag-550200 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-550200 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-550200 --network force-systemd-flag-550200 --ip 192.168.94.2 --volume force-systemd-flag-550200:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1
	I1228 07:20:06.127125   10956 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-550200 --name force-systemd-flag-550200 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-550200 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-550200 --network force-systemd-flag-550200 --ip 192.168.94.2 --volume force-systemd-flag-550200:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1: (2.6005485s)
	I1228 07:20:06.132308   10956 cli_runner.go:164] Run: docker container inspect force-systemd-flag-550200 --format={{.State.Running}}
	I1228 07:20:06.199542   10956 cli_runner.go:164] Run: docker container inspect force-systemd-flag-550200 --format={{.State.Status}}
	I1228 07:20:06.266538   10956 cli_runner.go:164] Run: docker exec force-systemd-flag-550200 stat /var/lib/dpkg/alternatives/iptables
	I1228 07:20:06.405553   10956 oci.go:144] the created container "force-systemd-flag-550200" has a running status.
	I1228 07:20:06.405553   10956 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-550200\id_rsa...
	I1228 07:20:06.576001   10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-550200\id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1228 07:20:06.591993   10956 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-550200\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1228 07:20:06.682003   10956 cli_runner.go:164] Run: docker container inspect force-systemd-flag-550200 --format={{.State.Status}}
	I1228 07:20:06.760031   10956 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1228 07:20:06.760031   10956 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-550200 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1228 07:20:06.905852   10956 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-550200\id_rsa...
	I1228 07:20:09.856351   10956 cli_runner.go:164] Run: docker container inspect force-systemd-flag-550200 --format={{.State.Status}}
	I1228 07:20:09.920362   10956 machine.go:94] provisionDockerMachine start ...
	I1228 07:20:09.926359   10956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-550200
	I1228 07:20:10.000354   10956 main.go:144] libmachine: Using SSH client type: native
	I1228 07:20:10.018361   10956 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7c03fe200] 0x7ff7c0400d60 <nil>  [] 0s} 127.0.0.1 54898 <nil> <nil>}
	I1228 07:20:10.018361   10956 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 07:20:10.206226   10956 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-550200
	
	I1228 07:20:10.206321   10956 ubuntu.go:182] provisioning hostname "force-systemd-flag-550200"
	I1228 07:20:10.211897   10956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-550200
	I1228 07:20:10.267874   10956 main.go:144] libmachine: Using SSH client type: native
	I1228 07:20:10.267874   10956 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7c03fe200] 0x7ff7c0400d60 <nil>  [] 0s} 127.0.0.1 54898 <nil> <nil>}
	I1228 07:20:10.267874   10956 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-550200 && echo "force-systemd-flag-550200" | sudo tee /etc/hostname
	I1228 07:20:10.456395   10956 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-550200
	
	I1228 07:20:10.460844   10956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-550200
	I1228 07:20:10.523621   10956 main.go:144] libmachine: Using SSH client type: native
	I1228 07:20:10.524620   10956 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7c03fe200] 0x7ff7c0400d60 <nil>  [] 0s} 127.0.0.1 54898 <nil> <nil>}
	I1228 07:20:10.524620   10956 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-550200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-550200/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-550200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 07:20:10.684283   10956 main.go:144] libmachine: SSH cmd err, output: <nil>: 
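
For reference, the /etc/hosts wiring above can be spot-checked from the host while the container is running; a sketch (container name taken from this run):

	docker exec force-systemd-flag-550200 grep force-systemd-flag-550200 /etc/hosts
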
	I1228 07:20:10.684283   10956 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1228 07:20:10.684853   10956 ubuntu.go:190] setting up certificates
	I1228 07:20:10.684853   10956 provision.go:84] configureAuth start
	I1228 07:20:10.688659   10956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-550200
	I1228 07:20:10.747751   10956 provision.go:143] copyHostCerts
	I1228 07:20:10.747751   10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1228 07:20:10.747751   10956 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1228 07:20:10.747751   10956 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1228 07:20:10.747751   10956 provision.go:87] duration metric: took 62.8974ms to configureAuth
	W1228 07:20:10.747751   10956 ubuntu.go:193] configureAuth failed: transferring file: &{BaseAsset:{SourcePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem TargetDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube TargetName:ca.pem Permissions:0777 Source:} reader:0xc001e120c0 writer:<nil> file:0xc00082cad8}: error removing file C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem: remove C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem: The process cannot access the file because it is being used by another process.
	I1228 07:20:10.748747   10956 retry.go:84] will retry after 0s: Temporary Error: transferring file: &{BaseAsset:{SourcePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem TargetDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube TargetName:ca.pem Permissions:0777 Source:} reader:0xc001e120c0 writer:<nil> file:0xc00082cad8}: error removing file C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem: remove C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem: The process cannot access the file because it is being used by another process.
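
For reference, the retried failure above is a Windows sharing violation on ca.pem ("being used by another process"). One way to identify the process holding the file open is the Sysinternals handle utility; a sketch, assuming handle.exe is installed and on PATH:

	handle.exe ca.pem
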
	I1228 07:20:10.749747   10956 provision.go:84] configureAuth start
	I1228 07:20:10.752749   10956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-550200
	I1228 07:20:10.804747   10956 provision.go:143] copyHostCerts
	I1228 07:20:10.804747   10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1228 07:20:10.805744   10956 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1228 07:20:10.805744   10956 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1228 07:20:10.805744   10956 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1228 07:20:10.806758   10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1228 07:20:10.806758   10956 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1228 07:20:10.806758   10956 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1228 07:20:10.806758   10956 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1228 07:20:10.807752   10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1228 07:20:10.807752   10956 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1228 07:20:10.807752   10956 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1228 07:20:10.807752   10956 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1228 07:20:10.808749   10956 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.force-systemd-flag-550200 san=[127.0.0.1 192.168.94.2 force-systemd-flag-550200 localhost minikube]
	I1228 07:20:10.979957   10956 provision.go:177] copyRemoteCerts
	I1228 07:20:10.985236   10956 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 07:20:10.989412   10956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-550200
	I1228 07:20:11.040612   10956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54898 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-550200\id_rsa Username:docker}
	I1228 07:20:11.161907   10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1228 07:20:11.161907   10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1228 07:20:11.190869   10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1228 07:20:11.191871   10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1204 bytes)
	I1228 07:20:11.217867   10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1228 07:20:11.217867   10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
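
For reference, the server certificate copied above (whose SANs were listed at the generation step) can be inspected inside the node; a sketch, assuming openssl is present in the kicbase image and is version 1.1.1 or newer (required for -ext):

	docker exec force-systemd-flag-550200 openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName
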
	I1228 07:20:11.242871   10956 provision.go:87] duration metric: took 493.1172ms to configureAuth
	I1228 07:20:11.242871   10956 ubuntu.go:206] setting minikube options for container-runtime
	I1228 07:20:11.242871   10956 config.go:182] Loaded profile config "force-systemd-flag-550200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 07:20:11.246869   10956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-550200
	I1228 07:20:11.302720   10956 main.go:144] libmachine: Using SSH client type: native
	I1228 07:20:11.302919   10956 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7c03fe200] 0x7ff7c0400d60 <nil>  [] 0s} 127.0.0.1 54898 <nil> <nil>}
	I1228 07:20:11.302919   10956 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1228 07:20:11.494758   10956 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1228 07:20:11.494787   10956 ubuntu.go:71] root file system type: overlay
	I1228 07:20:11.494985   10956 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1228 07:20:11.501016   10956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-550200
	I1228 07:20:11.560421   10956 main.go:144] libmachine: Using SSH client type: native
	I1228 07:20:11.561432   10956 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7c03fe200] 0x7ff7c0400d60 <nil>  [] 0s} 127.0.0.1 54898 <nil> <nil>}
	I1228 07:20:11.561432   10956 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1228 07:20:11.738665   10956 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1228 07:20:11.741664   10956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-550200
	I1228 07:20:11.810983   10956 main.go:144] libmachine: Using SSH client type: native
	I1228 07:20:11.812244   10956 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7c03fe200] 0x7ff7c0400d60 <nil>  [] 0s} 127.0.0.1 54898 <nil> <nil>}
	I1228 07:20:11.812244   10956 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1228 07:20:14.311277   10956 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-28 07:20:11.730362367 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
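
The SSH command whose output appears above relies on the exit status of diff: "diff -u old new" exits 0 when the files are identical, so the "|| { mv; daemon-reload; enable; restart; }" branch runs only when the rendered unit actually changed, which keeps repeated provisioning idempotent. A sketch of how that one-liner could be assembled (helper name is hypothetical):

    package main

    import "fmt"

    // updateUnitCmd builds the compare-then-swap shell idiom from the
    // log: install the freshly rendered unit and restart the service
    // only when `diff -u` reports a difference.
    func updateUnitCmd(name string) string {
    	cur := "/lib/systemd/system/" + name + ".service"
    	next := cur + ".new"
    	return fmt.Sprintf(
    		"sudo diff -u %s %s || { sudo mv %s %s; "+
    			"sudo systemctl -f daemon-reload && "+
    			"sudo systemctl -f enable %s && "+
    			"sudo systemctl -f restart %s; }",
    		cur, next, next, cur, name, name)
    }

    func main() {
    	fmt.Println(updateUnitCmd("docker"))
    }
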
	I1228 07:20:14.311277   10956 machine.go:97] duration metric: took 4.3908512s to provisionDockerMachine
	I1228 07:20:14.311277   10956 client.go:176] duration metric: took 1m2.6075446s to LocalClient.Create
	I1228 07:20:14.311277   10956 start.go:167] duration metric: took 1m2.6076338s to libmachine.API.Create "force-systemd-flag-550200"
	I1228 07:20:14.311277   10956 start.go:293] postStartSetup for "force-systemd-flag-550200" (driver="docker")
	I1228 07:20:14.311277   10956 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 07:20:14.316543   10956 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 07:20:14.321090   10956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-550200
	I1228 07:20:14.371309   10956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54898 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-550200\id_rsa Username:docker}
	I1228 07:20:14.553998   10956 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 07:20:14.564061   10956 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 07:20:14.564061   10956 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 07:20:14.564061   10956 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1228 07:20:14.564061   10956 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1228 07:20:14.564975   10956 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\135562.pem -> 135562.pem in /etc/ssl/certs
	I1228 07:20:14.564975   10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\135562.pem -> /etc/ssl/certs/135562.pem
	I1228 07:20:14.570653   10956 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 07:20:14.586316   10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\135562.pem --> /etc/ssl/certs/135562.pem (1708 bytes)
	I1228 07:20:14.621463   10956 start.go:296] duration metric: took 310.1814ms for postStartSetup
	I1228 07:20:14.630457   10956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-550200
	I1228 07:20:14.695864   10956 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\config.json ...
	I1228 07:20:14.706527   10956 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:20:14.710655   10956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-550200
	I1228 07:20:14.765477   10956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54898 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-550200\id_rsa Username:docker}
	I1228 07:20:14.902812   10956 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 07:20:14.914128   10956 start.go:128] duration metric: took 1m3.3335553s to createHost
	I1228 07:20:14.914128   10956 start.go:83] releasing machines lock for "force-systemd-flag-550200", held for 1m3.3345576s
	I1228 07:20:14.919119   10956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-550200
	I1228 07:20:14.988126   10956 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1228 07:20:14.991128   10956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-550200
	I1228 07:20:14.992125   10956 ssh_runner.go:195] Run: cat /version.json
	I1228 07:20:14.997119   10956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-550200
	I1228 07:20:15.058196   10956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54898 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-550200\id_rsa Username:docker}
	I1228 07:20:15.066206   10956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54898 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-550200\id_rsa Username:docker}
	W1228 07:20:15.275369   10956 start.go:879] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1228 07:20:15.284377   10956 ssh_runner.go:195] Run: systemctl --version
	I1228 07:20:15.307126   10956 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 07:20:15.316403   10956 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 07:20:15.323072   10956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 07:20:15.375677   10956 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1228 07:20:15.375677   10956 start.go:496] detecting cgroup driver to use...
	I1228 07:20:15.375677   10956 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1228 07:20:15.375677   10956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1228 07:20:15.379672   10956 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1228 07:20:15.379672   10956 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
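
The status-127 failure above is the host binary name leaking into the guest: the probe invokes curl.exe over SSH inside the Linux container, where the binary is plain curl, so the registry check fails even when connectivity is fine and the proxy warning fires anyway. A sketch of picking the binary by the target machine's OS (hypothetical helper, not minikube's code):

    package main

    import "fmt"

    // probeCmd returns a connectivity-check command for the given
    // target. The point: the binary name must follow the OS of the
    // machine the command runs on (the Linux guest), not the OS of the
    // caller (Windows), or the probe fails with status 127.
    func probeCmd(targetOS, url string) string {
    	bin := "curl"
    	if targetOS == "windows" {
    		bin = "curl.exe"
    	}
    	return fmt.Sprintf("%s -sS -m 2 %s", bin, url)
    }

    func main() {
    	fmt.Println(probeCmd("linux", "https://registry.k8s.io/"))
    }
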
	I1228 07:20:15.402674   10956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1228 07:20:15.432172   10956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1228 07:20:15.449211   10956 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1228 07:20:15.454795   10956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1228 07:20:15.479910   10956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:20:15.496899   10956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1228 07:20:15.515899   10956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:20:15.534898   10956 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 07:20:15.550898   10956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1228 07:20:15.569174   10956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1228 07:20:15.596504   10956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1228 07:20:15.621758   10956 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 07:20:15.639057   10956 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 07:20:15.655039   10956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:20:15.840106   10956 ssh_runner.go:195] Run: sudo systemctl restart containerd
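
The containerd reconfiguration above is a series of in-place sed edits; the key one flips SystemdCgroup to true for the runc runtime so containerd matches the enforced systemd cgroup driver. The same rewrite expressed in Go (a sketch; the real config.toml carries many more keys):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Mirrors `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'`
    // from the log: set the key while preserving the original indentation.
    var systemdCgroupRe = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)

    func forceSystemdCgroup(configTOML string) string {
    	return systemdCgroupRe.ReplaceAllString(configTOML, "${1}SystemdCgroup = true")
    }

    func main() {
    	in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = false`
    	fmt.Println(forceSystemdCgroup(in))
    }
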
	I1228 07:20:16.048914   10956 start.go:496] detecting cgroup driver to use...
	I1228 07:20:16.048988   10956 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1228 07:20:16.055392   10956 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1228 07:20:16.083565   10956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 07:20:16.106782   10956 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 07:20:16.178346   10956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 07:20:16.204187   10956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1228 07:20:16.223991   10956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:20:16.257916   10956 ssh_runner.go:195] Run: which cri-dockerd
	I1228 07:20:16.276284   10956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1228 07:20:16.291306   10956 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1228 07:20:16.318294   10956 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1228 07:20:16.510441   10956 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1228 07:20:16.659787   10956 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1228 07:20:16.659961   10956 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1228 07:20:16.688458   10956 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1228 07:20:16.715937   10956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:20:16.881963   10956 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1228 07:20:23.640310   10956 ssh_runner.go:235] Completed: sudo systemctl restart docker: (6.7582068s)
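
At 07:20:16 a 129-byte /etc/docker/daemon.json is pushed before docker is restarted. The log records only the byte count, not the contents; a plausible shape for a file that selects the systemd cgroup driver, which is what this test enforces, would be (assumed contents):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// Assumed shape of a daemon.json that selects the systemd cgroup
    	// driver; the real file presumably carries additional settings to
    	// reach the 129 bytes seen in the log.
    	cfg := map[string]any{
    		"exec-opts": []string{"native.cgroupdriver=systemd"},
    	}
    	b, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(b))
    }
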
	I1228 07:20:23.644551   10956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 07:20:23.677445   10956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1228 07:20:23.706470   10956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1228 07:20:23.733677   10956 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1228 07:20:23.901526   10956 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1228 07:20:24.068194   10956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:20:24.273582   10956 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1228 07:20:24.301765   10956 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1228 07:20:24.325840   10956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:20:24.469463   10956 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1228 07:20:24.579819   10956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1228 07:20:24.687464   10956 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1228 07:20:24.693572   10956 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1228 07:20:24.702875   10956 start.go:574] Will wait 60s for crictl version
	I1228 07:20:24.707624   10956 ssh_runner.go:195] Run: which crictl
	I1228 07:20:24.724794   10956 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 07:20:24.791350   10956 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
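
"Will wait 60s for socket path" and "Will wait 60s for crictl version" above are poll-until-deadline checks. A minimal sketch of the pattern (hypothetical helper):

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists or the deadline passes,
    // approximating the "Will wait 60s for socket path" step in the log.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return errors.New("timed out waiting for " + path)
    }

    func main() {
    	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
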
	I1228 07:20:24.796094   10956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1228 07:20:24.838895   10956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1228 07:20:24.889882   10956 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I1228 07:20:24.893890   10956 cli_runner.go:164] Run: docker exec -t force-systemd-flag-550200 dig +short host.docker.internal
	I1228 07:20:25.028382   10956 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1228 07:20:25.035386   10956 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1228 07:20:25.044398   10956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:20:25.063389   10956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" force-systemd-flag-550200
	I1228 07:20:25.116387   10956 kubeadm.go:884] updating cluster {Name:force-systemd-flag-550200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-550200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:20:25.116387   10956 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1228 07:20:25.119394   10956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1228 07:20:25.153386   10956 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1228 07:20:25.153386   10956 docker.go:624] Images already preloaded, skipping extraction
	I1228 07:20:25.157386   10956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1228 07:20:25.189391   10956 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1228 07:20:25.189391   10956 cache_images.go:86] Images are preloaded, skipping loading
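
"Images already preloaded, skipping extraction" above is decided by comparing the "docker images" listing against the expected image list for this Kubernetes version; extraction of the preload tarball is skipped only when every expected image is present. A simplified sketch of that set check:

    package main

    import "fmt"

    // preloaded reports whether every expected image is already present,
    // the decision behind "Images already preloaded" in the log.
    func preloaded(have, want []string) bool {
    	set := make(map[string]bool, len(have))
    	for _, img := range have {
    		set[img] = true
    	}
    	for _, img := range want {
    		if !set[img] {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	have := []string{"registry.k8s.io/pause:3.10.1", "registry.k8s.io/etcd:3.6.6-0"}
    	want := []string{"registry.k8s.io/pause:3.10.1"}
    	fmt.Println(preloaded(have, want)) // true
    }
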
	I1228 07:20:25.189391   10956 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 docker true true} ...
	I1228 07:20:25.189391   10956 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-550200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-550200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 07:20:25.194408   10956 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1228 07:20:25.273139   10956 cni.go:84] Creating CNI manager for ""
	I1228 07:20:25.273139   10956 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1228 07:20:25.273139   10956 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 07:20:25.273139   10956 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-550200 NodeName:force-systemd-flag-550200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:20:25.273139   10956 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-flag-550200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
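
Note the "cgroupDriver: systemd" line in the KubeletConfiguration above: the kubelet only becomes healthy if the container runtime reports the same driver, which is why "docker info --format {{.CgroupDriver}}" is run just before this config dump. A sketch of that consistency check (assumes a docker CLI on PATH):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // dockerCgroupDriver asks the Docker daemon which cgroup driver it
    // runs with, mirroring the `docker info --format {{.CgroupDriver}}`
    // call in the log. kubeadm init stalls at the kubelet health check
    // when this does not match the kubelet's cgroupDriver setting.
    func dockerCgroupDriver() (string, error) {
    	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	driver, err := dockerCgroupDriver()
    	if err != nil {
    		fmt.Println("docker info failed:", err)
    		return
    	}
    	fmt.Println("docker cgroup driver:", driver, "- kubelet expects: systemd")
    }
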
	I1228 07:20:25.277124   10956 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:20:25.290122   10956 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:20:25.295121   10956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:20:25.307126   10956 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1228 07:20:25.327132   10956 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:20:25.347327   10956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1228 07:20:25.374403   10956 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:20:25.381332   10956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
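
The /etc/hosts update above uses a grep -v / echo / cp pipeline: any stale line for the host is filtered out, the fresh entry is appended, and the result goes to a temp file that is copied over /etc/hosts, so the file never accumulates duplicate entries. The same upsert in Go (a sketch):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // upsertHost rebuilds an /etc/hosts body with exactly one entry for
    // host, mirroring the grep -v / echo / cp pipeline in the log.
    func upsertHost(hosts, ip, host string) string {
    	var kept []string
    	for _, line := range strings.Split(hosts, "\n") {
    		if !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)
    	return strings.Join(kept, "\n")
    }

    func main() {
    	fmt.Println(upsertHost("127.0.0.1\tlocalhost", "192.168.94.2", "control-plane.minikube.internal"))
    }
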
	I1228 07:20:25.402504   10956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:20:25.573471   10956 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:20:25.596187   10956 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200 for IP: 192.168.94.2
	I1228 07:20:25.596187   10956 certs.go:195] generating shared ca certs ...
	I1228 07:20:25.596187   10956 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:20:25.596721   10956 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1228 07:20:25.597181   10956 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1228 07:20:25.597298   10956 certs.go:257] generating profile certs ...
	I1228 07:20:25.597890   10956 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\client.key
	I1228 07:20:25.598089   10956 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\client.crt with IP's: []
	I1228 07:20:25.671035   10956 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\client.crt ...
	I1228 07:20:25.671035   10956 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\client.crt: {Name:mkeca33edbc926c4db6950fc71e673d941c9c5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:20:25.671859   10956 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\client.key ...
	I1228 07:20:25.671859   10956 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\client.key: {Name:mka0644c661ad6783cefb18b8a346f500e1e790f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:20:25.672865   10956 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.key.e9e44517
	I1228 07:20:25.672865   10956 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.crt.e9e44517 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1228 07:20:25.758013   10956 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.crt.e9e44517 ...
	I1228 07:20:25.758013   10956 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.crt.e9e44517: {Name:mkd62fa7e059cebd6c3b5a6c81d7fbfca6ad136f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:20:25.758013   10956 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.key.e9e44517 ...
	I1228 07:20:25.758013   10956 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.key.e9e44517: {Name:mkf8cf27f77f5c6d7677777eb24e4b3275e15fe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:20:25.759780   10956 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.crt.e9e44517 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.crt
	I1228 07:20:25.770464   10956 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.key.e9e44517 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.key
	I1228 07:20:25.788622   10956 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\proxy-client.key
	I1228 07:20:25.789200   10956 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\proxy-client.crt with IP's: []
	I1228 07:20:25.874065   10956 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\proxy-client.crt ...
	I1228 07:20:25.874065   10956 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\proxy-client.crt: {Name:mk5776f318ae9538168ceb01c5acd41dab52c41a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:20:25.874431   10956 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\proxy-client.key ...
	I1228 07:20:25.874431   10956 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\proxy-client.key: {Name:mked8db6b11b02c360e25d18a4e35f554d068b66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:20:25.875770   10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1228 07:20:25.876171   10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1228 07:20:25.876299   10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1228 07:20:25.876380   10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1228 07:20:25.876380   10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1228 07:20:25.876380   10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1228 07:20:25.876380   10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1228 07:20:25.887020   10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1228 07:20:25.887795   10956 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13556.pem (1338 bytes)
	W1228 07:20:25.888586   10956 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13556_empty.pem, impossibly tiny 0 bytes
	I1228 07:20:25.888586   10956 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1228 07:20:25.888845   10956 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1228 07:20:25.889071   10956 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1228 07:20:25.889260   10956 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1228 07:20:25.889435   10956 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\135562.pem (1708 bytes)
	I1228 07:20:25.889435   10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\135562.pem -> /usr/share/ca-certificates/135562.pem
	I1228 07:20:25.889435   10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:20:25.889435   10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13556.pem -> /usr/share/ca-certificates/13556.pem
	I1228 07:20:25.890363   10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:20:25.923428   10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 07:20:25.959837   10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:20:25.986565   10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:20:26.020380   10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1228 07:20:26.050111   10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 07:20:26.076534   10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:20:26.106184   10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 07:20:26.140917   10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\135562.pem --> /usr/share/ca-certificates/135562.pem (1708 bytes)
	I1228 07:20:26.172919   10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:20:26.202376   10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13556.pem --> /usr/share/ca-certificates/13556.pem (1338 bytes)
	I1228 07:20:26.235084   10956 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:20:26.258400   10956 ssh_runner.go:195] Run: openssl version
	I1228 07:20:26.272403   10956 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/135562.pem
	I1228 07:20:26.288416   10956 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/135562.pem /etc/ssl/certs/135562.pem
	I1228 07:20:26.304399   10956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135562.pem
	I1228 07:20:26.312400   10956 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:37 /usr/share/ca-certificates/135562.pem
	I1228 07:20:26.315406   10956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135562.pem
	I1228 07:20:26.366042   10956 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 07:20:26.382049   10956 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/135562.pem /etc/ssl/certs/3ec20f2e.0
	I1228 07:20:26.398047   10956 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:20:26.414042   10956 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:20:26.435914   10956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:20:26.443191   10956 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:29 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:20:26.447188   10956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:20:26.494187   10956 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 07:20:26.512188   10956 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1228 07:20:26.531196   10956 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13556.pem
	I1228 07:20:26.548200   10956 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13556.pem /etc/ssl/certs/13556.pem
	I1228 07:20:26.566191   10956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13556.pem
	I1228 07:20:26.573196   10956 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:37 /usr/share/ca-certificates/13556.pem
	I1228 07:20:26.578180   10956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13556.pem
	I1228 07:20:26.633725   10956 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:20:26.651406   10956 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13556.pem /etc/ssl/certs/51391683.0
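
The 3ec20f2e.0, b5213941.0 and 51391683.0 names above come from "openssl x509 -hash -noout", which prints the subject-name hash OpenSSL uses to look up CAs in /etc/ssl/certs; each certificate is then symlinked under "<hash>.0". A sketch of the same install step (assumes openssl on PATH and write access to /etc/ssl/certs):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkCert installs certPath under /etc/ssl/certs/<subject-hash>.0,
    // the layout OpenSSL scans when verifying chains. Mirrors the
    // `openssl x509 -hash -noout` plus `ln -fs` steps in the log.
    func linkCert(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	os.Remove(link) // force, like ln -fs
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Println(err)
    	}
    }
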
	I1228 07:20:26.666405   10956 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:20:26.673401   10956 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1228 07:20:26.673401   10956 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-550200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-550200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:20:26.676399   10956 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1228 07:20:26.715823   10956 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:20:26.736007   10956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1228 07:20:26.751952   10956 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:20:26.757429   10956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:20:26.776073   10956 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:20:26.776073   10956 kubeadm.go:158] found existing configuration files:
	
	I1228 07:20:26.780843   10956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:20:26.799092   10956 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:20:26.802999   10956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:20:26.827908   10956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:20:26.840488   10956 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:20:26.844486   10956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:20:26.859485   10956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:20:26.872483   10956 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:20:26.875473   10956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:20:26.891474   10956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:20:26.903476   10956 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:20:26.908483   10956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
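
The grep-then-rm sequence above is stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is removed otherwise so kubeadm init regenerates it. On this fresh node none of the files exist, so every rm is a no-op. A sketch of one iteration:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // cleanStaleConf removes conf unless it already points at endpoint,
    // approximating the grep-then-rm loop in the log.
    func cleanStaleConf(conf, endpoint string) error {
    	b, err := os.ReadFile(conf)
    	if err == nil && strings.Contains(string(b), endpoint) {
    		return nil // config already targets the right endpoint
    	}
    	return os.Remove(conf)
    }

    func main() {
    	err := cleanStaleConf("/etc/kubernetes/admin.conf",
    		"https://control-plane.minikube.internal:8443")
    	fmt.Println(err) // on a fresh node: "no such file or directory"
    }
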
	I1228 07:20:26.926495   10956 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:20:27.080496   10956 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1228 07:20:27.180486   10956 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1228 07:20:27.323507   10956 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 07:24:29.087485   10956 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1228 07:24:29.087593   10956 kubeadm.go:319] 
	I1228 07:24:29.087827   10956 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1228 07:24:29.093448   10956 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:24:29.093586   10956 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:24:29.093846   10956 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:24:29.094037   10956 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1228 07:24:29.094198   10956 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1228 07:24:29.094198   10956 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1228 07:24:29.094198   10956 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1228 07:24:29.094198   10956 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1228 07:24:29.094198   10956 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1228 07:24:29.094732   10956 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1228 07:24:29.094804   10956 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1228 07:24:29.094804   10956 kubeadm.go:319] CONFIG_INET: enabled
	I1228 07:24:29.094804   10956 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1228 07:24:29.094804   10956 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1228 07:24:29.095343   10956 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1228 07:24:29.095450   10956 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1228 07:24:29.095691   10956 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1228 07:24:29.095894   10956 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1228 07:24:29.096129   10956 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1228 07:24:29.096334   10956 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1228 07:24:29.096481   10956 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1228 07:24:29.096641   10956 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1228 07:24:29.096745   10956 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1228 07:24:29.096903   10956 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1228 07:24:29.096903   10956 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1228 07:24:29.096903   10956 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1228 07:24:29.096903   10956 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1228 07:24:29.097575   10956 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1228 07:24:29.097666   10956 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1228 07:24:29.097666   10956 kubeadm.go:319] OS: Linux
	I1228 07:24:29.097666   10956 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:24:29.097666   10956 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1228 07:24:29.097666   10956 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:24:29.097666   10956 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:24:29.098188   10956 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:24:29.098245   10956 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:24:29.098308   10956 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:24:29.098439   10956 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:24:29.098499   10956 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1228 07:24:29.098587   10956 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:24:29.098743   10956 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:24:29.098788   10956 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:24:29.098788   10956 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:24:29.102647   10956 out.go:252]   - Generating certificates and keys ...
	I1228 07:24:29.102647   10956 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:24:29.102647   10956 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:24:29.103216   10956 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1228 07:24:29.103250   10956 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1228 07:24:29.103250   10956 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1228 07:24:29.103250   10956 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1228 07:24:29.103250   10956 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1228 07:24:29.103908   10956 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-550200 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1228 07:24:29.103908   10956 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1228 07:24:29.103908   10956 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-550200 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1228 07:24:29.104556   10956 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1228 07:24:29.104626   10956 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1228 07:24:29.104626   10956 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1228 07:24:29.104626   10956 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:24:29.104626   10956 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:24:29.104626   10956 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:24:29.105312   10956 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:24:29.105560   10956 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:24:29.105717   10956 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:24:29.105953   10956 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:24:29.106179   10956 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:24:29.111391   10956 out.go:252]   - Booting up control plane ...
	I1228 07:24:29.111610   10956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:24:29.111680   10956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:24:29.111680   10956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:24:29.111680   10956 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:24:29.112472   10956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:24:29.112769   10956 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:24:29.113035   10956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:24:29.113180   10956 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:24:29.113223   10956 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:24:29.113764   10956 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 07:24:29.113975   10956 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001181899s
	I1228 07:24:29.113975   10956 kubeadm.go:319] 
	I1228 07:24:29.113975   10956 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1228 07:24:29.113975   10956 kubeadm.go:319] 	- The kubelet is not running
	I1228 07:24:29.113975   10956 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1228 07:24:29.113975   10956 kubeadm.go:319] 
	I1228 07:24:29.114500   10956 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1228 07:24:29.114577   10956 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1228 07:24:29.114622   10956 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1228 07:24:29.114622   10956 kubeadm.go:319] 
	W1228 07:24:29.114622   10956 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-550200 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-550200 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001181899s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
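The wait-control-plane failure above boils down to one probe: kubeadm polls the kubelet healthz endpoint for up to 4m0s using the exact curl it quotes in the error. The same check can be replayed by hand from inside the node (a sketch of the probe, not minikube's implementation):

    # Poll the kubelet health endpoint the way kubeadm describes (4m budget)
    deadline=$((SECONDS + 240))
    until curl -sSL http://127.0.0.1:10248/healthz; do
        if [ "$SECONDS" -ge "$deadline" ]; then
            echo "kubelet not healthy after 4m0s" >&2
            exit 1
        fi
        sleep 2
    done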
	
	I1228 07:24:29.119113   10956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1228 07:24:29.583949   10956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:24:29.602639   10956 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:24:29.608264   10956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:24:29.620750   10956 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:24:29.620814   10956 kubeadm.go:158] found existing configuration files:
	
	I1228 07:24:29.625366   10956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:24:29.640824   10956 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:24:29.647918   10956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:24:29.671692   10956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:24:29.685903   10956 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:24:29.690163   10956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:24:29.708235   10956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:24:29.721408   10956 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:24:29.725407   10956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:24:29.744750   10956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:24:29.759627   10956 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:24:29.765603   10956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
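The grep/rm sequence above is minikube's stale-config check before the retry: each kubeconfig is kept only if it already references the expected control-plane endpoint. Condensed into one loop (a hypothetical consolidation of the commands logged above, not minikube code):

    # Drop any kubeconfig that does not point at the expected endpoint
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
            || sudo rm -f "/etc/kubernetes/$f"
    done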
	I1228 07:24:29.781606   10956 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:24:29.907425   10956 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1228 07:24:29.999388   10956 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1228 07:24:30.121722   10956 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
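The SystemVerification warning is the most actionable line in this run: the 5.15 WSL2 kernel is on cgroups v1, which kubelet v1.35 refuses unless FailCgroupV1 is explicitly set to false. The [patches] lines earlier show kubeadm already applies a strategic-merge patch to the kubeletconfiguration target, so the opt-out could be expressed through the same mechanism. A sketch only, assuming kubeadm's --patches directory convention and the lowerCamelCase field name failCgroupV1; this run did not do this:

    # /tmp/patches/kubeletconfiguration+strategic.yaml
    mkdir -p /tmp/patches
    cat <<'EOF' > /tmp/patches/kubeletconfiguration+strategic.yaml
    failCgroupV1: false
    EOF
    # then: kubeadm init --config ... --patches /tmp/patches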
	I1228 07:28:30.829097   10956 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1228 07:28:30.829193   10956 kubeadm.go:319] 
	I1228 07:28:30.829521   10956 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1228 07:28:30.834292   10956 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:28:30.834969   10956 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:28:30.835305   10956 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:28:30.835470   10956 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1228 07:28:30.835661   10956 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1228 07:28:30.835885   10956 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1228 07:28:30.836054   10956 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1228 07:28:30.836105   10956 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1228 07:28:30.836105   10956 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1228 07:28:30.836105   10956 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1228 07:28:30.836105   10956 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1228 07:28:30.836105   10956 kubeadm.go:319] CONFIG_INET: enabled
	I1228 07:28:30.836845   10956 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1228 07:28:30.836935   10956 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1228 07:28:30.837045   10956 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1228 07:28:30.837188   10956 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1228 07:28:30.837348   10956 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1228 07:28:30.837367   10956 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1228 07:28:30.837367   10956 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1228 07:28:30.837367   10956 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1228 07:28:30.837367   10956 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1228 07:28:30.837367   10956 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1228 07:28:30.838073   10956 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1228 07:28:30.838144   10956 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1228 07:28:30.838144   10956 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1228 07:28:30.838144   10956 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1228 07:28:30.838144   10956 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1228 07:28:30.838754   10956 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1228 07:28:30.838917   10956 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1228 07:28:30.839077   10956 kubeadm.go:319] OS: Linux
	I1228 07:28:30.839105   10956 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:28:30.839105   10956 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1228 07:28:30.839105   10956 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:28:30.839643   10956 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:28:30.839812   10956 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:28:30.840092   10956 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:28:30.840238   10956 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:28:30.840388   10956 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:28:30.840415   10956 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1228 07:28:30.840415   10956 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:28:30.840415   10956 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:28:30.841442   10956 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:28:30.841442   10956 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:28:30.845025   10956 out.go:252]   - Generating certificates and keys ...
	I1228 07:28:30.845350   10956 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:28:30.845413   10956 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:28:30.845413   10956 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1228 07:28:30.845413   10956 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1228 07:28:30.845413   10956 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1228 07:28:30.845413   10956 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1228 07:28:30.846025   10956 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1228 07:28:30.846065   10956 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1228 07:28:30.846065   10956 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1228 07:28:30.846065   10956 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1228 07:28:30.846065   10956 kubeadm.go:319] [certs] Using the existing "sa" key
	I1228 07:28:30.846707   10956 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:28:30.846707   10956 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:28:30.846707   10956 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:28:30.846707   10956 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:28:30.847386   10956 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:28:30.847386   10956 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:28:30.847386   10956 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:28:30.847386   10956 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:28:30.862925   10956 out.go:252]   - Booting up control plane ...
	I1228 07:28:30.863263   10956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:28:30.863453   10956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:28:30.863599   10956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:28:30.863920   10956 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:28:30.864159   10956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:28:30.864402   10956 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:28:30.864711   10956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:28:30.864711   10956 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:28:30.864711   10956 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:28:30.865367   10956 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 07:28:30.865547   10956 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001175587s
	I1228 07:28:30.865547   10956 kubeadm.go:319] 
	I1228 07:28:30.865705   10956 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1228 07:28:30.865843   10956 kubeadm.go:319] 	- The kubelet is not running
	I1228 07:28:30.866152   10956 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1228 07:28:30.866189   10956 kubeadm.go:319] 
	I1228 07:28:30.866348   10956 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1228 07:28:30.866348   10956 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1228 07:28:30.866348   10956 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1228 07:28:30.866348   10956 kubeadm.go:319] 
	I1228 07:28:30.866348   10956 kubeadm.go:403] duration metric: took 8m4.1856378s to StartCluster
	I1228 07:28:30.871002   10956 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:28:30.892372   10956 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:28:30Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:28:30.896535   10956 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:28:30.913072   10956 logs.go:279] Failed to list containers for "etcd": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:28:30Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:28:30.918213   10956 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:28:30.943813   10956 logs.go:279] Failed to list containers for "coredns": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:28:30Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:28:30.950809   10956 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:28:30.974259   10956 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:28:30Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:28:30.980208   10956 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:28:31.009939   10956 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:28:31Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:28:31.015235   10956 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:28:31.040595   10956 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:28:31Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:28:31.046595   10956 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:28:31.070125   10956 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:28:31Z" level=error msg="open /run/runc: no such file or directory"
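The seven runc failures above are expected noise with the docker driver: dockerd keeps its runc state under its own runtime root rather than the default /run/runc, so that directory never exists and minikube falls back to crictl/docker for container listing (visible just below). A quick confirmation that no control-plane container ever started, assuming the same container name (diagnostic sketch):

    # Multiple --filter name= values are ORed; expect empty output here
    docker exec force-systemd-flag-550200 sudo docker ps -a \
        --filter name=kube-apiserver --filter name=etcd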
	I1228 07:28:31.070125   10956 logs.go:123] Gathering logs for kubelet ...
	I1228 07:28:31.070125   10956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:28:31.144442   10956 logs.go:123] Gathering logs for dmesg ...
	I1228 07:28:31.145427   10956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:28:31.190901   10956 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:28:31.190901   10956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:28:31.281419   10956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1228 07:28:31.270801   10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:28:31.272185   10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:28:31.272956   10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:28:31.276447   10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:28:31.277504   10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1228 07:28:31.270801   10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:28:31.272185   10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:28:31.272956   10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:28:31.276447   10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:28:31.277504   10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:28:31.281419   10956 logs.go:123] Gathering logs for Docker ...
	I1228 07:28:31.281419   10956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:28:31.314672   10956 logs.go:123] Gathering logs for container status ...
	I1228 07:28:31.314672   10956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1228 07:28:31.381927   10956 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001175587s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1228 07:28:31.381958   10956 out.go:285] * 
	W1228 07:28:31.381958   10956 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001175587s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1228 07:28:31.381958   10956 out.go:285] * 
	W1228 07:28:31.381958   10956 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 07:28:31.388026   10956 out.go:203] 
	W1228 07:28:31.391609   10956 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001175587s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1228 07:28:31.391609   10956 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1228 07:28:31.391609   10956 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1228 07:28:31.394315   10956 out.go:203] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-flag-550200 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker" : exit status 109
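Note: the start failure above is K8S_KUBELET_NOT_RUNNING on a cgroup v1 WSL2 host, and the log's own Suggestion line is to pass --extra-config=kubelet.cgroup-driver=systemd to minikube start. A minimal retry sketch reusing this run's profile name and flags (an illustration only, not part of the recorded test run):

	out/minikube-windows-amd64.exe delete -p force-systemd-flag-550200
	out/minikube-windows-amd64.exe start -p force-systemd-flag-550200 --memory=3072 --force-systemd --driver=docker --extra-config=kubelet.cgroup-driver=systemd
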
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-550200 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-28 07:28:32.4921253 +0000 UTC m=+3630.002556201
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-flag-550200
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-550200:

-- stdout --
	[
	    {
	        "Id": "14a46fe5d933ea1efad735c6d73abe971d2a24cb985fb4251c469b7df8ed3b6f",
	        "Created": "2025-12-28T07:20:03.584345984Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 181825,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:20:05.584805917Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/14a46fe5d933ea1efad735c6d73abe971d2a24cb985fb4251c469b7df8ed3b6f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/14a46fe5d933ea1efad735c6d73abe971d2a24cb985fb4251c469b7df8ed3b6f/hostname",
	        "HostsPath": "/var/lib/docker/containers/14a46fe5d933ea1efad735c6d73abe971d2a24cb985fb4251c469b7df8ed3b6f/hosts",
	        "LogPath": "/var/lib/docker/containers/14a46fe5d933ea1efad735c6d73abe971d2a24cb985fb4251c469b7df8ed3b6f/14a46fe5d933ea1efad735c6d73abe971d2a24cb985fb4251c469b7df8ed3b6f-json.log",
	        "Name": "/force-systemd-flag-550200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-550200:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-550200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7d460ac1e4172c0c01df46c83e1759ddbc23cdf15ffe05923b58c670d122017c-init/diff:/var/lib/docker/overlay2/755790e5dd4d70e5001883ef2a2cf79adb7d5054e85cb9aeffa64c965a5cf81c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7d460ac1e4172c0c01df46c83e1759ddbc23cdf15ffe05923b58c670d122017c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7d460ac1e4172c0c01df46c83e1759ddbc23cdf15ffe05923b58c670d122017c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7d460ac1e4172c0c01df46c83e1759ddbc23cdf15ffe05923b58c670d122017c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-550200",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-550200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-550200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-550200",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-550200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7b01aa59cf3dbb7fff2defbbdb819e3432807283d3712a450b4622997252042e",
	            "SandboxKey": "/var/run/docker/netns/7b01aa59cf3d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54900"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54901"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54902"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-550200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null,
	                    "NetworkID": "072878f0256b28fadd181fa98b6ffd57a25d8bc213f05f4c604fe7261bee4292",
	                    "EndpointID": "415f7dede622756688d9ccc5c6418bf3081b28ab1cf92831c96cd01d6c45c653",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-550200",
	                        "14a46fe5d933"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
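Note: docker_test.go:110 verifies the node's cgroup driver with a templated docker info query. The same check can be run by hand against this profile; under --force-systemd the expected answer is presumably "systemd", while the host daemon in the logs above reports cgroupfs:

	out/minikube-windows-amd64.exe -p force-systemd-flag-550200 ssh "docker info --format {{.CgroupDriver}}"
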
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-550200 -n force-systemd-flag-550200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-550200 -n force-systemd-flag-550200: exit status 6 (626.4862ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1228 07:28:33.147791    9504 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-550200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
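Note: the status probe exits 6 because "force-systemd-flag-550200" is absent from the kubeconfig, and its stdout warns that kubectl points at a stale context. Following the hint printed above, a minimal repair sketch (assuming the profile still exists at this point; the -p flag is minikube's standard profile selector):

	out/minikube-windows-amd64.exe update-context -p force-systemd-flag-550200
	out/minikube-windows-amd64.exe status -p force-systemd-flag-550200
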
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-550200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-550200 logs -n 25: (1.1171517s)
helpers_test.go:261: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                ARGS                                                │          PROFILE          │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-410600 sudo cat /usr/lib/systemd/system/cri-docker.service                               │ cilium-410600             │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-410600 sudo cri-dockerd --version                                                        │ cilium-410600             │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-410600 sudo systemctl status containerd --all --full --no-pager                          │ cilium-410600             │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-410600 sudo systemctl cat containerd --no-pager                                          │ cilium-410600             │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-410600 sudo cat /lib/systemd/system/containerd.service                                   │ cilium-410600             │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-410600 sudo cat /etc/containerd/config.toml                                              │ cilium-410600             │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-410600 sudo containerd config dump                                                       │ cilium-410600             │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-410600 sudo systemctl status crio --all --full --no-pager                                │ cilium-410600             │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-410600 sudo systemctl cat crio --no-pager                                                │ cilium-410600             │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-410600 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                      │ cilium-410600             │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-410600 sudo crio config                                                                  │ cilium-410600             │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │                     │
	│ delete  │ -p cilium-410600                                                                                   │ cilium-410600             │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │ 28 Dec 25 07:24 UTC │
	│ start   │ -p force-systemd-env-970200 --memory=3072 --alsologtostderr -v=5 --driver=docker                   │ force-systemd-env-970200  │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │                     │
	│ delete  │ -p stopped-upgrade-550200                                                                          │ stopped-upgrade-550200    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:25 UTC │ 28 Dec 25 07:25 UTC │
	│ start   │ -p missing-upgrade-224300 --memory=3072 --driver=docker                                            │ missing-upgrade-224300    │ minikube4\jenkins │ v1.35.0 │ 28 Dec 25 07:25 UTC │ 28 Dec 25 07:26 UTC │
	│ start   │ -p cert-expiration-709700 --memory=3072 --cert-expiration=8760h --driver=docker                    │ cert-expiration-709700    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:25 UTC │ 28 Dec 25 07:26 UTC │
	│ start   │ -p missing-upgrade-224300 --memory=3072 --alsologtostderr -v=1 --driver=docker                     │ missing-upgrade-224300    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:26 UTC │ 28 Dec 25 07:27 UTC │
	│ delete  │ -p cert-expiration-709700                                                                          │ cert-expiration-709700    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:26 UTC │ 28 Dec 25 07:26 UTC │
	│ start   │ -p test-preload-362600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker │ test-preload-362600       │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:26 UTC │ 28 Dec 25 07:28 UTC │
	│ delete  │ -p missing-upgrade-224300                                                                          │ missing-upgrade-224300    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:27 UTC │ 28 Dec 25 07:27 UTC │
	│ start   │ -p running-upgrade-509300 --memory=3072 --vm-driver=docker                                         │ running-upgrade-509300    │ minikube4\jenkins │ v1.35.0 │ 28 Dec 25 07:27 UTC │                     │
	│ image   │ test-preload-362600 image pull ghcr.io/medyagh/image-mirrors/busybox:latest                        │ test-preload-362600       │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:28 UTC │ 28 Dec 25 07:28 UTC │
	│ stop    │ -p test-preload-362600                                                                             │ test-preload-362600       │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:28 UTC │ 28 Dec 25 07:28 UTC │
	│ start   │ -p test-preload-362600 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker           │ test-preload-362600       │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:28 UTC │                     │
	│ ssh     │ force-systemd-flag-550200 ssh docker info --format {{.CgroupDriver}}                               │ force-systemd-flag-550200 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:28 UTC │ 28 Dec 25 07:28 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:28:28
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 07:28:28.871886    8696 out.go:360] Setting OutFile to fd 1520 ...
	I1228 07:28:28.921737    8696 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:28:28.921737    8696 out.go:374] Setting ErrFile to fd 1784...
	I1228 07:28:28.921737    8696 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:28:28.935146    8696 out.go:368] Setting JSON to false
	I1228 07:28:28.938154    8696 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6848,"bootTime":1766900060,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"22H2","kernelVersion":"10.0.19045.6691 Build 19045.6691","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1228 07:28:28.938154    8696 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1228 07:28:28.942149    8696 out.go:179] * [test-preload-362600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	I1228 07:28:28.946142    8696 notify.go:221] Checking for updates...
	I1228 07:28:28.947149    8696 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1228 07:28:28.950144    8696 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:28:28.953146    8696 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1228 07:28:28.956143    8696 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:28:28.960148    8696 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:28:28.962148    8696 config.go:182] Loaded profile config "test-preload-362600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 07:28:28.963144    8696 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:28:29.073154    8696 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1228 07:28:29.076149    8696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:28:29.306079    8696 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-28 07:28:29.288331579 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1228 07:28:29.309076    8696 out.go:179] * Using the docker driver based on existing profile
	I1228 07:28:29.313077    8696 start.go:309] selected driver: docker
	I1228 07:28:29.313077    8696 start.go:928] validating driver "docker" against &{Name:test-preload-362600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:test-preload-362600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:28:29.314080    8696 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:28:29.320085    8696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:28:29.566908    8696 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-28 07:28:29.547706565 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1228 07:28:29.566908    8696 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:28:29.566908    8696 cni.go:84] Creating CNI manager for ""
	I1228 07:28:29.566908    8696 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1228 07:28:29.567906    8696 start.go:353] cluster config:
	{Name:test-preload-362600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:test-preload-362600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:28:29.570905    8696 out.go:179] * Starting "test-preload-362600" primary control-plane node in "test-preload-362600" cluster
	I1228 07:28:29.575905    8696 cache.go:134] Beginning downloading kic base image for docker with docker
	I1228 07:28:29.577904    8696 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:28:29.581904    8696 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1228 07:28:29.581904    8696 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:28:29.581904    8696 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1228 07:28:29.581904    8696 cache.go:65] Caching tarball of preloaded images
	I1228 07:28:29.581904    8696 preload.go:251] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1228 07:28:29.582898    8696 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1228 07:28:29.582898    8696 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\test-preload-362600\config.json ...
	I1228 07:28:29.654901    8696 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:28:29.654901    8696 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:28:29.654901    8696 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:28:29.654901    8696 start.go:360] acquireMachinesLock for test-preload-362600: {Name:mk0079c10dfd22d58b9f49240ef09a361a7938ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:28:29.654901    8696 start.go:364] duration metric: took 0s to acquireMachinesLock for "test-preload-362600"
	I1228 07:28:29.654901    8696 start.go:96] Skipping create...Using existing machine configuration
	I1228 07:28:29.654901    8696 fix.go:54] fixHost starting: 
	I1228 07:28:29.663902    8696 cli_runner.go:164] Run: docker container inspect test-preload-362600 --format={{.State.Status}}
	I1228 07:28:29.724898    8696 fix.go:112] recreateIfNeeded on test-preload-362600: state=Stopped err=<nil>
	W1228 07:28:29.724898    8696 fix.go:138] unexpected machine state, will restart: <nil>
	I1228 07:28:30.829097   10956 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1228 07:28:30.829193   10956 kubeadm.go:319] 
	I1228 07:28:30.829521   10956 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1228 07:28:30.834292   10956 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:28:30.834969   10956 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:28:30.835305   10956 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:28:30.835470   10956 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1228 07:28:30.835661   10956 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1228 07:28:30.835885   10956 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1228 07:28:30.836054   10956 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1228 07:28:30.836105   10956 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1228 07:28:30.836105   10956 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1228 07:28:30.836105   10956 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1228 07:28:30.836105   10956 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1228 07:28:30.836105   10956 kubeadm.go:319] CONFIG_INET: enabled
	I1228 07:28:30.836845   10956 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1228 07:28:30.836935   10956 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1228 07:28:30.837045   10956 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1228 07:28:30.837188   10956 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1228 07:28:30.837348   10956 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1228 07:28:30.837367   10956 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1228 07:28:30.837367   10956 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1228 07:28:30.837367   10956 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1228 07:28:30.837367   10956 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1228 07:28:30.837367   10956 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1228 07:28:30.838073   10956 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1228 07:28:30.838144   10956 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1228 07:28:30.838144   10956 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1228 07:28:30.838144   10956 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1228 07:28:30.838144   10956 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1228 07:28:30.838754   10956 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1228 07:28:30.838917   10956 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1228 07:28:30.839077   10956 kubeadm.go:319] OS: Linux
	I1228 07:28:30.839105   10956 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:28:30.839105   10956 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1228 07:28:30.839105   10956 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:28:30.839643   10956 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:28:30.839812   10956 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:28:30.840092   10956 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:28:30.840238   10956 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:28:30.840388   10956 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:28:30.840415   10956 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1228 07:28:30.840415   10956 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:28:30.840415   10956 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:28:30.841442   10956 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:28:30.841442   10956 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:28:30.845025   10956 out.go:252]   - Generating certificates and keys ...
	I1228 07:28:30.845350   10956 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:28:30.845413   10956 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:28:30.845413   10956 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1228 07:28:30.845413   10956 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1228 07:28:30.845413   10956 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1228 07:28:30.845413   10956 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1228 07:28:30.846025   10956 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1228 07:28:30.846065   10956 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1228 07:28:30.846065   10956 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1228 07:28:30.846065   10956 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1228 07:28:30.846065   10956 kubeadm.go:319] [certs] Using the existing "sa" key
	I1228 07:28:30.846707   10956 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:28:30.846707   10956 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:28:30.846707   10956 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:28:30.846707   10956 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:28:30.847386   10956 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:28:30.847386   10956 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:28:30.847386   10956 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:28:30.847386   10956 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:28:30.862925   10956 out.go:252]   - Booting up control plane ...
	I1228 07:28:30.863263   10956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:28:30.863453   10956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:28:30.863599   10956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:28:30.863920   10956 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:28:30.864159   10956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:28:30.864402   10956 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:28:30.864711   10956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:28:30.864711   10956 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:28:30.864711   10956 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:28:30.865367   10956 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 07:28:30.865547   10956 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001175587s
	I1228 07:28:30.865547   10956 kubeadm.go:319] 
	I1228 07:28:30.865705   10956 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1228 07:28:30.865843   10956 kubeadm.go:319] 	- The kubelet is not running
	I1228 07:28:30.866152   10956 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1228 07:28:30.866189   10956 kubeadm.go:319] 
	I1228 07:28:30.866348   10956 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1228 07:28:30.866348   10956 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1228 07:28:30.866348   10956 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1228 07:28:30.866348   10956 kubeadm.go:319] 
	I1228 07:28:30.866348   10956 kubeadm.go:403] duration metric: took 8m4.1856378s to StartCluster
	I1228 07:28:30.871002   10956 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:28:30.892372   10956 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:28:30Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:28:30.896535   10956 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:28:30.913072   10956 logs.go:279] Failed to list containers for "etcd": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:28:30Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:28:30.918213   10956 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:28:30.943813   10956 logs.go:279] Failed to list containers for "coredns": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:28:30Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:28:30.950809   10956 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:28:30.974259   10956 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:28:30Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:28:30.980208   10956 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:28:31.009939   10956 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:28:31Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:28:31.015235   10956 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:28:31.040595   10956 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:28:31Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:28:31.046595   10956 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:28:31.070125   10956 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:28:31Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:28:31.070125   10956 logs.go:123] Gathering logs for kubelet ...
	I1228 07:28:31.070125   10956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:28:31.144442   10956 logs.go:123] Gathering logs for dmesg ...
	I1228 07:28:31.145427   10956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:28:31.190901   10956 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:28:31.190901   10956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:28:31.281419   10956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1228 07:28:31.270801   10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:28:31.272185   10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:28:31.272956   10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:28:31.276447   10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:28:31.277504   10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1228 07:28:31.270801   10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:28:31.272185   10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:28:31.272956   10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:28:31.276447   10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:28:31.277504   10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:28:31.281419   10956 logs.go:123] Gathering logs for Docker ...
	I1228 07:28:31.281419   10956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:28:31.314672   10956 logs.go:123] Gathering logs for container status ...
	I1228 07:28:31.314672   10956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1228 07:28:31.381927   10956 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001175587s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1228 07:28:31.381958   10956 out.go:285] * 
	W1228 07:28:31.381958   10956 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001175587s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1228 07:28:31.381958   10956 out.go:285] * 
	W1228 07:28:31.381958   10956 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 07:28:31.388026   10956 out.go:203] 
	W1228 07:28:31.391609   10956 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001175587s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1228 07:28:31.391609   10956 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1228 07:28:31.391609   10956 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1228 07:28:31.394315   10956 out.go:203] 
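The suggestion above is minikube's own; applied to this run, the retry would be the command below (same profile and flags as the failing invocation). Whether it would help here is doubtful, since the kubelet journal further down shows the kubelet rejecting the cgroup v1 host outright rather than a cgroup-driver mismatch.

	out/minikube-windows-amd64.exe start -p force-systemd-flag-550200 --memory=3072 --force-systemd --driver=docker --extra-config=kubelet.cgroup-driver=systemd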
	I1228 07:28:31.944635   10488 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1228 07:28:31.944635   10488 kubeadm.go:310] [preflight] Running pre-flight checks
	I1228 07:28:31.944635   10488 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:28:31.944635   10488 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:28:31.945626   10488 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:28:31.945626   10488 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:28:31.947629   10488 out.go:235]   - Generating certificates and keys ...
	I1228 07:28:31.947629   10488 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1228 07:28:31.947629   10488 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1228 07:28:31.948621   10488 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1228 07:28:31.948621   10488 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1228 07:28:31.948621   10488 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1228 07:28:31.948621   10488 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1228 07:28:31.948621   10488 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1228 07:28:31.948621   10488 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost running-upgrade-509300] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1228 07:28:31.948621   10488 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1228 07:28:31.949625   10488 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost running-upgrade-509300] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1228 07:28:31.949625   10488 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1228 07:28:31.949625   10488 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1228 07:28:31.949625   10488 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1228 07:28:31.949625   10488 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:28:31.949625   10488 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:28:31.949625   10488 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:28:31.949625   10488 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:28:31.950630   10488 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:28:31.950630   10488 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:28:31.950630   10488 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:28:31.950630   10488 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:28:31.952631   10488 out.go:235]   - Booting up control plane ...
	I1228 07:28:31.952631   10488 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:28:31.953636   10488 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:28:31.953636   10488 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:28:31.953636   10488 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:28:31.953636   10488 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:28:31.953636   10488 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1228 07:28:31.953636   10488 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:28:31.954653   10488 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 07:28:31.954653   10488 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002997436s
	I1228 07:28:31.954653   10488 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1228 07:28:31.954653   10488 kubeadm.go:310] [api-check] The API server is healthy after 7.002683961s
	I1228 07:28:31.954653   10488 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1228 07:28:31.955631   10488 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1228 07:28:31.955631   10488 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1228 07:28:31.955631   10488 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-509300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1228 07:28:31.955631   10488 kubeadm.go:310] [bootstrap-token] Using token: nn6gwz.03ll6pyso0maxojd
	I1228 07:28:31.959618   10488 out.go:235]   - Configuring RBAC rules ...
	I1228 07:28:31.959618   10488 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1228 07:28:31.959618   10488 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1228 07:28:31.959618   10488 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1228 07:28:31.960625   10488 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1228 07:28:31.960625   10488 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1228 07:28:31.960625   10488 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1228 07:28:31.960625   10488 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1228 07:28:31.960625   10488 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1228 07:28:31.961635   10488 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1228 07:28:31.961635   10488 kubeadm.go:310] 
	I1228 07:28:31.961635   10488 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1228 07:28:31.961635   10488 kubeadm.go:310] 
	I1228 07:28:31.961635   10488 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1228 07:28:31.961635   10488 kubeadm.go:310] 
	I1228 07:28:31.961635   10488 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1228 07:28:31.961635   10488 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1228 07:28:31.961635   10488 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1228 07:28:31.961635   10488 kubeadm.go:310] 
	I1228 07:28:31.961635   10488 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1228 07:28:31.961635   10488 kubeadm.go:310] 
	I1228 07:28:31.962640   10488 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1228 07:28:31.962640   10488 kubeadm.go:310] 
	I1228 07:28:31.962640   10488 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1228 07:28:31.962640   10488 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1228 07:28:31.962640   10488 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1228 07:28:31.962640   10488 kubeadm.go:310] 
	I1228 07:28:31.962640   10488 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1228 07:28:31.962640   10488 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1228 07:28:31.962640   10488 kubeadm.go:310] 
	I1228 07:28:31.963645   10488 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nn6gwz.03ll6pyso0maxojd \
	I1228 07:28:31.963645   10488 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fea1b033220c76616a69daefe9de210d60574273e9df21e09282f95b8582ae4 \
	I1228 07:28:31.963645   10488 kubeadm.go:310] 	--control-plane 
	I1228 07:28:31.963645   10488 kubeadm.go:310] 
	I1228 07:28:31.963645   10488 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1228 07:28:31.963645   10488 kubeadm.go:310] 
	I1228 07:28:31.963645   10488 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nn6gwz.03ll6pyso0maxojd \
	I1228 07:28:31.963645   10488 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fea1b033220c76616a69daefe9de210d60574273e9df21e09282f95b8582ae4 
	I1228 07:28:31.964648   10488 cni.go:84] Creating CNI manager for ""
	I1228 07:28:31.964648   10488 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1228 07:28:31.967635   10488 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1228 07:28:31.976634   10488 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1228 07:28:32.033654   10488 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
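The 496 bytes copied above are minikube's generated bridge CNI config; the exact file is not captured in this log. For orientation, a minimal conflist of the kind the CNI bridge plugin accepts looks roughly like this (illustrative values only, not the actual file):

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    }
	  ]
	}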
	I1228 07:28:32.131098   10488 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1228 07:28:32.143378   10488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 07:28:32.145892   10488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-509300 minikube.k8s.io/updated_at=2025_12_28T07_28_32_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=dd5d320e41b5451cdf3c01891bc4e13d189586ed-dirty minikube.k8s.io/name=running-upgrade-509300 minikube.k8s.io/primary=true
	I1228 07:28:32.150406   10488 ops.go:34] apiserver oom_adj: -16
	I1228 07:28:32.338382   10488 kubeadm.go:1113] duration metric: took 207.0472ms to wait for elevateKubeSystemPrivileges
	I1228 07:28:32.373410   10488 kubeadm.go:394] duration metric: took 12.9308811s to StartCluster
	I1228 07:28:32.373544   10488 settings.go:142] acquiring lock: {Name:mkac923b109dc030b95783d9963c0a5b20048f30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:28:32.373594   10488 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\AppData\Local\Temp\legacy_kubeconfig572837400
	I1228 07:28:32.376045   10488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\AppData\Local\Temp\legacy_kubeconfig572837400: {Name:mk13db8e9f6987c9bdb728c51e105117f28b0fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:28:32.378156   10488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1228 07:28:32.378156   10488 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1228 07:28:32.378276   10488 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 07:28:32.378352   10488 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-509300"
	I1228 07:28:32.378352   10488 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-509300"
	I1228 07:28:32.378352   10488 addons.go:238] Setting addon storage-provisioner=true in "running-upgrade-509300"
	I1228 07:28:32.378352   10488 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-509300"
	I1228 07:28:32.378352   10488 host.go:66] Checking if "running-upgrade-509300" exists ...
	I1228 07:28:32.378352   10488 config.go:182] Loaded profile config "running-upgrade-509300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1228 07:28:32.383435   10488 out.go:177] * Verifying Kubernetes components...
	I1228 07:28:32.394845   10488 cli_runner.go:164] Run: docker container inspect running-upgrade-509300 --format={{.State.Status}}
	I1228 07:28:32.394845   10488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:28:32.395355   10488 cli_runner.go:164] Run: docker container inspect running-upgrade-509300 --format={{.State.Status}}
	I1228 07:28:32.463878   10488 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> Docker <==
	Dec 28 07:20:23 force-systemd-flag-550200 dockerd[1187]: time="2025-12-28T07:20:23.497567320Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 28 07:20:23 force-systemd-flag-550200 dockerd[1187]: time="2025-12-28T07:20:23.497608424Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 28 07:20:23 force-systemd-flag-550200 dockerd[1187]: time="2025-12-28T07:20:23.497618625Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 28 07:20:23 force-systemd-flag-550200 dockerd[1187]: time="2025-12-28T07:20:23.497624226Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 28 07:20:23 force-systemd-flag-550200 dockerd[1187]: time="2025-12-28T07:20:23.497629826Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 28 07:20:23 force-systemd-flag-550200 dockerd[1187]: time="2025-12-28T07:20:23.497653729Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 28 07:20:23 force-systemd-flag-550200 dockerd[1187]: time="2025-12-28T07:20:23.497688732Z" level=info msg="Initializing buildkit"
	Dec 28 07:20:23 force-systemd-flag-550200 dockerd[1187]: time="2025-12-28T07:20:23.620259180Z" level=info msg="Completed buildkit initialization"
	Dec 28 07:20:23 force-systemd-flag-550200 dockerd[1187]: time="2025-12-28T07:20:23.636274093Z" level=info msg="Daemon has completed initialization"
	Dec 28 07:20:23 force-systemd-flag-550200 dockerd[1187]: time="2025-12-28T07:20:23.636499816Z" level=info msg="API listen on /run/docker.sock"
	Dec 28 07:20:23 force-systemd-flag-550200 dockerd[1187]: time="2025-12-28T07:20:23.636559822Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 28 07:20:23 force-systemd-flag-550200 dockerd[1187]: time="2025-12-28T07:20:23.636628329Z" level=info msg="API listen on [::]:2376"
	Dec 28 07:20:23 force-systemd-flag-550200 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 28 07:20:24 force-systemd-flag-550200 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 28 07:20:24 force-systemd-flag-550200 cri-dockerd[1478]: time="2025-12-28T07:20:24Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 28 07:20:24 force-systemd-flag-550200 cri-dockerd[1478]: time="2025-12-28T07:20:24Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 28 07:20:24 force-systemd-flag-550200 cri-dockerd[1478]: time="2025-12-28T07:20:24Z" level=info msg="Start docker client with request timeout 0s"
	Dec 28 07:20:24 force-systemd-flag-550200 cri-dockerd[1478]: time="2025-12-28T07:20:24Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 28 07:20:24 force-systemd-flag-550200 cri-dockerd[1478]: time="2025-12-28T07:20:24Z" level=info msg="Loaded network plugin cni"
	Dec 28 07:20:24 force-systemd-flag-550200 cri-dockerd[1478]: time="2025-12-28T07:20:24Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 28 07:20:24 force-systemd-flag-550200 cri-dockerd[1478]: time="2025-12-28T07:20:24Z" level=info msg="Setting cgroupDriver systemd"
	Dec 28 07:20:24 force-systemd-flag-550200 cri-dockerd[1478]: time="2025-12-28T07:20:24Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 28 07:20:24 force-systemd-flag-550200 cri-dockerd[1478]: time="2025-12-28T07:20:24Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 28 07:20:24 force-systemd-flag-550200 cri-dockerd[1478]: time="2025-12-28T07:20:24Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 28 07:20:24 force-systemd-flag-550200 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1228 07:28:34.180540   10560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:28:34.181519   10560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:28:34.182578   10560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:28:34.183915   10560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:28:34.184878   10560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.926377] CPU: 4 PID: 252615 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f37543beb20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f37543beaf6.
	[  +0.000001] RSP: 002b:00007ffed12f1230 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec28 07:27] tmpfs: Unknown parameter 'noswap'
	[  +7.486264] tmpfs: Unknown parameter 'noswap'
	[  +0.657209] tmpfs: Unknown parameter 'noswap'
	[Dec28 07:28] tmpfs: Unknown parameter 'noswap'
	[  +7.727880] CPU: 12 PID: 266434 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7fc4c0c23b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7fc4c0c23af6.
	[  +0.000001] RSP: 002b:00007ffd4d85e4f0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000004] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000002] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000005] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +0.675643] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 07:28:34 up  1:53,  0 user,  load average: 3.98, 3.46, 2.77
	Linux force-systemd-flag-550200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:28:31 force-systemd-flag-550200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 28 07:28:31 force-systemd-flag-550200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 28 07:28:31 force-systemd-flag-550200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:28:31 force-systemd-flag-550200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:28:31 force-systemd-flag-550200 kubelet[10370]: E1228 07:28:31.794031   10370 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 28 07:28:31 force-systemd-flag-550200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 28 07:28:31 force-systemd-flag-550200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 28 07:28:32 force-systemd-flag-550200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 28 07:28:32 force-systemd-flag-550200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:28:32 force-systemd-flag-550200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:28:32 force-systemd-flag-550200 kubelet[10415]: E1228 07:28:32.555390   10415 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 28 07:28:32 force-systemd-flag-550200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 28 07:28:32 force-systemd-flag-550200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 28 07:28:33 force-systemd-flag-550200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 28 07:28:33 force-systemd-flag-550200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:28:33 force-systemd-flag-550200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:28:33 force-systemd-flag-550200 kubelet[10441]: E1228 07:28:33.287179   10441 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 28 07:28:33 force-systemd-flag-550200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 28 07:28:33 force-systemd-flag-550200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 28 07:28:33 force-systemd-flag-550200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 28 07:28:33 force-systemd-flag-550200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:28:33 force-systemd-flag-550200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:28:34 force-systemd-flag-550200 kubelet[10520]: E1228 07:28:34.067339   10520 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 28 07:28:34 force-systemd-flag-550200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 28 07:28:34 force-systemd-flag-550200 systemd[1]: kubelet.service: Failed with result 'exit-code'.

-- /stdout --
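The kubelet journal above carries the root cause for this test: kubelet v1.35 refuses to start on a cgroup v1 host unless its `failCgroupV1` configuration option is explicitly set to false, and this WSL2 kernel is still on cgroup v1 (the Docker daemon logs the matching deprecation warning). One host-level workaround, assuming Docker Desktop's WSL2 backend, is to boot the WSL2 kernel with cgroup v2 only via %UserProfile%\.wslconfig on the Windows host and then restart WSL; this is an environment change outside the test's control, sketched here only as a pointer:

	[wsl2]
	kernelCommandLine = cgroup_no_v1=all

	wsl --shutdown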
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p force-systemd-flag-550200 -n force-systemd-flag-550200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p force-systemd-flag-550200 -n force-systemd-flag-550200: exit status 6 (703.7292ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1228 07:28:35.089533   13432 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-550200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
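The status output names its own fix for the stale kubectl context; scoped to this profile it would be the command below. It is moot for this run, since the profile is deleted two steps later, but it is the standard remedy when a kubeconfig endpoint goes stale:

	out/minikube-windows-amd64.exe update-context -p force-systemd-flag-550200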
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-550200" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-550200" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-550200
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-550200: (2.8537129s)
--- FAIL: TestForceSystemdFlag (568.11s)

TestForceSystemdEnv (523.7s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-970200 --memory=3072 --alsologtostderr -v=5 --driver=docker
docker_test.go:155: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-env-970200 --memory=3072 --alsologtostderr -v=5 --driver=docker: exit status 109 (8m34.8280036s)

-- stdout --
	* [force-systemd-env-970200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-env-970200" primary control-plane node in "force-systemd-env-970200" cluster
	* Pulling base image v0.0.48-1766884053-22351 ...

-- /stdout --
** stderr ** 
	I1228 07:24:45.972063    9696 out.go:360] Setting OutFile to fd 1144 ...
	I1228 07:24:46.018075    9696 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:24:46.018075    9696 out.go:374] Setting ErrFile to fd 1960...
	I1228 07:24:46.018075    9696 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:24:46.032067    9696 out.go:368] Setting JSON to false
	I1228 07:24:46.036080    9696 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6625,"bootTime":1766900060,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"22H2","kernelVersion":"10.0.19045.6691 Build 19045.6691","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1228 07:24:46.036080    9696 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1228 07:24:46.039061    9696 out.go:179] * [force-systemd-env-970200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	I1228 07:24:46.042060    9696 notify.go:221] Checking for updates...
	I1228 07:24:46.044065    9696 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1228 07:24:46.047074    9696 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1228 07:24:46.049059    9696 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:24:46.051076    9696 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:24:46.055071    9696 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1228 07:24:46.058069    9696 config.go:182] Loaded profile config "cert-expiration-709700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 07:24:46.058069    9696 config.go:182] Loaded profile config "force-systemd-flag-550200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 07:24:46.059072    9696 config.go:182] Loaded profile config "stopped-upgrade-550200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1228 07:24:46.059072    9696 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:24:46.166061    9696 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1228 07:24:46.169062    9696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:24:46.413801    9696 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-28 07:24:46.39590567 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1228 07:24:46.418802    9696 out.go:179] * Using the docker driver based on user configuration
	I1228 07:24:46.423806    9696 start.go:309] selected driver: docker
	I1228 07:24:46.424805    9696 start.go:928] validating driver "docker" against <nil>
	I1228 07:24:46.424805    9696 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:24:46.430809    9696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:24:46.664077    9696 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-28 07:24:46.644699578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
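Note: the docker info dump above is the host-side daemon state minikube parses while validating the driver. For this force-systemd test the relevant detail is CgroupDriver:cgroupfs; the test later forces the node's docker and containerd onto the systemd driver regardless of this host setting. A single field can be pulled out directly; a minimal check, assuming the same Docker Desktop daemon:

    docker info --format '{{.CgroupDriver}}'
    # prints "cgroupfs" for the host daemon shown in this log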
	I1228 07:24:46.664077    9696 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 07:24:46.665087    9696 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 07:24:46.668093    9696 out.go:179] * Using Docker Desktop driver with root privileges
	I1228 07:24:46.671102    9696 cni.go:84] Creating CNI manager for ""
	I1228 07:24:46.671102    9696 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1228 07:24:46.671102    9696 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1228 07:24:46.671102    9696 start.go:353] cluster config:
	{Name:force-systemd-env-970200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-970200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:24:46.677077    9696 out.go:179] * Starting "force-systemd-env-970200" primary control-plane node in "force-systemd-env-970200" cluster
	I1228 07:24:46.680077    9696 cache.go:134] Beginning downloading kic base image for docker with docker
	I1228 07:24:46.685093    9696 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:24:46.691599    9696 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:24:46.691599    9696 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1228 07:24:46.691599    9696 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1228 07:24:46.691599    9696 cache.go:65] Caching tarball of preloaded images
	I1228 07:24:46.691599    9696 preload.go:251] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1228 07:24:46.692306    9696 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1228 07:24:46.692444    9696 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\config.json ...
	I1228 07:24:46.692634    9696 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\config.json: {Name:mkf22622f5c0bc73cd133bae89afbb67752bbaf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:24:46.766209    9696 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:24:46.766209    9696 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:24:46.766209    9696 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:24:46.766209    9696 start.go:360] acquireMachinesLock for force-systemd-env-970200: {Name:mk83988c662f2c6e7fed912424b1af5b11449308 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:24:46.767210    9696 start.go:364] duration metric: took 1.0012ms to acquireMachinesLock for "force-systemd-env-970200"
	I1228 07:24:46.767210    9696 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-970200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-970200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1228 07:24:46.767210    9696 start.go:125] createHost starting for "" (driver="docker")
	I1228 07:24:46.772216    9696 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1228 07:24:46.772216    9696 start.go:159] libmachine.API.Create for "force-systemd-env-970200" (driver="docker")
	I1228 07:24:46.772216    9696 client.go:173] LocalClient.Create starting
	I1228 07:24:46.773214    9696 main.go:144] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1228 07:24:46.773214    9696 main.go:144] libmachine: Decoding PEM data...
	I1228 07:24:46.773214    9696 main.go:144] libmachine: Parsing certificate...
	I1228 07:24:46.773214    9696 main.go:144] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1228 07:24:46.773214    9696 main.go:144] libmachine: Decoding PEM data...
	I1228 07:24:46.773214    9696 main.go:144] libmachine: Parsing certificate...
	I1228 07:24:46.777219    9696 cli_runner.go:164] Run: docker network inspect force-systemd-env-970200 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1228 07:24:46.824209    9696 cli_runner.go:211] docker network inspect force-systemd-env-970200 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1228 07:24:46.828207    9696 network_create.go:284] running [docker network inspect force-systemd-env-970200] to gather additional debugging logs...
	I1228 07:24:46.828207    9696 cli_runner.go:164] Run: docker network inspect force-systemd-env-970200
	W1228 07:24:46.879004    9696 cli_runner.go:211] docker network inspect force-systemd-env-970200 returned with exit code 1
	I1228 07:24:46.879769    9696 network_create.go:287] error running [docker network inspect force-systemd-env-970200]: docker network inspect force-systemd-env-970200: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-970200 not found
	I1228 07:24:46.879922    9696 network_create.go:289] output of [docker network inspect force-systemd-env-970200]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-970200 not found
	
	** /stderr **
	I1228 07:24:46.883201    9696 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:24:46.962567    9696 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1228 07:24:46.977523    9696 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1228 07:24:46.993011    9696 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1228 07:24:47.005668    9696 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017677a0}
	I1228 07:24:47.005668    9696 network_create.go:124] attempt to create docker network force-systemd-env-970200 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1228 07:24:47.008621    9696 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-970200 force-systemd-env-970200
	I1228 07:24:47.508528    9696 network_create.go:108] docker network force-systemd-env-970200 192.168.76.0/24 created
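Note: the subnet probe above walks minikube's private 192.168.x.0/24 pool, skips the three ranges already reserved by other profiles, and lands on 192.168.76.0/24. The resulting network can be re-checked after the fact; a minimal sketch, assuming the profile still exists:

    docker network inspect force-systemd-env-970200 \
      --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'
    # expected: 192.168.76.0/24 gw 192.168.76.1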
	I1228 07:24:47.508592    9696 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-970200" container
	I1228 07:24:47.521435    9696 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1228 07:24:47.579828    9696 cli_runner.go:164] Run: docker volume create force-systemd-env-970200 --label name.minikube.sigs.k8s.io=force-systemd-env-970200 --label created_by.minikube.sigs.k8s.io=true
	I1228 07:24:47.635894    9696 oci.go:103] Successfully created a docker volume force-systemd-env-970200
	I1228 07:24:47.642070    9696 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-970200-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-970200 --entrypoint /usr/bin/test -v force-systemd-env-970200:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib
	I1228 07:24:48.922497    9696 cli_runner.go:217] Completed: docker run --rm --name force-systemd-env-970200-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-970200 --entrypoint /usr/bin/test -v force-systemd-env-970200:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib: (1.2804071s)
	I1228 07:24:48.922497    9696 oci.go:107] Successfully prepared a docker volume force-systemd-env-970200
	I1228 07:24:48.922497    9696 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1228 07:24:48.922497    9696 kic.go:194] Starting extracting preloaded images to volume ...
	I1228 07:24:48.926351    9696 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-970200:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1228 07:25:02.951597    9696 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-970200:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir: (14.0250308s)
	I1228 07:25:02.951597    9696 kic.go:203] duration metric: took 14.028885s to extract preloaded images to volume ...
	I1228 07:25:02.960328    9696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:25:03.201761    9696 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-28 07:25:03.184304291 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1228 07:25:03.204760    9696 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1228 07:25:03.467023    9696 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-970200 --name force-systemd-env-970200 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-970200 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-970200 --network force-systemd-env-970200 --ip 192.168.76.2 --volume force-systemd-env-970200:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1
	I1228 07:25:04.136050    9696 cli_runner.go:164] Run: docker container inspect force-systemd-env-970200 --format={{.State.Running}}
	I1228 07:25:04.198181    9696 cli_runner.go:164] Run: docker container inspect force-systemd-env-970200 --format={{.State.Status}}
	I1228 07:25:04.255201    9696 cli_runner.go:164] Run: docker exec force-systemd-env-970200 stat /var/lib/dpkg/alternatives/iptables
	I1228 07:25:04.362569    9696 oci.go:144] the created container "force-systemd-env-970200" has a running status.
	I1228 07:25:04.362569    9696 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-env-970200\id_rsa...
	I1228 07:25:04.427171    9696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-env-970200\id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1228 07:25:04.440421    9696 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-env-970200\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1228 07:25:04.519362    9696 cli_runner.go:164] Run: docker container inspect force-systemd-env-970200 --format={{.State.Status}}
	I1228 07:25:04.578373    9696 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1228 07:25:04.578373    9696 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-970200 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1228 07:25:04.701588    9696 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-env-970200\id_rsa...
	I1228 07:25:06.866568    9696 cli_runner.go:164] Run: docker container inspect force-systemd-env-970200 --format={{.State.Status}}
	I1228 07:25:06.918070    9696 machine.go:94] provisionDockerMachine start ...
	I1228 07:25:06.921075    9696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-970200
	I1228 07:25:06.977073    9696 main.go:144] libmachine: Using SSH client type: native
	I1228 07:25:06.994261    9696 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7c03fe200] 0x7ff7c0400d60 <nil>  [] 0s} 127.0.0.1 55469 <nil> <nil>}
	I1228 07:25:06.994788    9696 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 07:25:07.160085    9696 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-970200
	
	I1228 07:25:07.160136    9696 ubuntu.go:182] provisioning hostname "force-systemd-env-970200"
	I1228 07:25:07.163563    9696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-970200
	I1228 07:25:07.216981    9696 main.go:144] libmachine: Using SSH client type: native
	I1228 07:25:07.216981    9696 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7c03fe200] 0x7ff7c0400d60 <nil>  [] 0s} 127.0.0.1 55469 <nil> <nil>}
	I1228 07:25:07.216981    9696 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-970200 && echo "force-systemd-env-970200" | sudo tee /etc/hostname
	I1228 07:25:07.399383    9696 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-970200
	
	I1228 07:25:07.406189    9696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-970200
	I1228 07:25:07.464192    9696 main.go:144] libmachine: Using SSH client type: native
	I1228 07:25:07.464192    9696 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7c03fe200] 0x7ff7c0400d60 <nil>  [] 0s} 127.0.0.1 55469 <nil> <nil>}
	I1228 07:25:07.464712    9696 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-970200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-970200/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-970200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 07:25:07.641039    9696 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 07:25:07.641039    9696 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1228 07:25:07.641179    9696 ubuntu.go:190] setting up certificates
	I1228 07:25:07.641179    9696 provision.go:84] configureAuth start
	I1228 07:25:07.645059    9696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-970200
	I1228 07:25:07.698448    9696 provision.go:143] copyHostCerts
	I1228 07:25:07.698448    9696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1228 07:25:07.698448    9696 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1228 07:25:07.698448    9696 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1228 07:25:07.699179    9696 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1228 07:25:07.699888    9696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1228 07:25:07.699888    9696 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1228 07:25:07.699888    9696 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1228 07:25:07.700502    9696 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1228 07:25:07.701210    9696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1228 07:25:07.701301    9696 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1228 07:25:07.701301    9696 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1228 07:25:07.701301    9696 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1228 07:25:07.702425    9696 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.force-systemd-env-970200 san=[127.0.0.1 192.168.76.2 force-systemd-env-970200 localhost minikube]
	I1228 07:25:07.832444    9696 provision.go:177] copyRemoteCerts
	I1228 07:25:07.836220    9696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 07:25:07.839262    9696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-970200
	I1228 07:25:07.894959    9696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55469 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-env-970200\id_rsa Username:docker}
	I1228 07:25:08.014909    9696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1228 07:25:08.015231    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I1228 07:25:08.047605    9696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1228 07:25:08.047973    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 07:25:08.077840    9696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1228 07:25:08.077877    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1228 07:25:08.109892    9696 provision.go:87] duration metric: took 468.6709ms to configureAuth
	I1228 07:25:08.109892    9696 ubuntu.go:206] setting minikube options for container-runtime
	I1228 07:25:08.111099    9696 config.go:182] Loaded profile config "force-systemd-env-970200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 07:25:08.114788    9696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-970200
	I1228 07:25:08.170841    9696 main.go:144] libmachine: Using SSH client type: native
	I1228 07:25:08.171482    9696 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7c03fe200] 0x7ff7c0400d60 <nil>  [] 0s} 127.0.0.1 55469 <nil> <nil>}
	I1228 07:25:08.171538    9696 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1228 07:25:08.339943    9696 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1228 07:25:08.340643    9696 ubuntu.go:71] root file system type: overlay
	I1228 07:25:08.340643    9696 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1228 07:25:08.344204    9696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-970200
	I1228 07:25:08.399933    9696 main.go:144] libmachine: Using SSH client type: native
	I1228 07:25:08.400025    9696 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7c03fe200] 0x7ff7c0400d60 <nil>  [] 0s} 127.0.0.1 55469 <nil> <nil>}
	I1228 07:25:08.400025    9696 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1228 07:25:08.594583    9696 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1228 07:25:08.598206    9696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-970200
	I1228 07:25:08.654277    9696 main.go:144] libmachine: Using SSH client type: native
	I1228 07:25:08.654277    9696 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7c03fe200] 0x7ff7c0400d60 <nil>  [] 0s} 127.0.0.1 55469 <nil> <nil>}
	I1228 07:25:08.654277    9696 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1228 07:25:10.196103    9696 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-28 07:25:08.584766910 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1228 07:25:10.196630    9696 machine.go:97] duration metric: took 3.2785092s to provisionDockerMachine
	I1228 07:25:10.196686    9696 client.go:176] duration metric: took 23.4241113s to LocalClient.Create
	I1228 07:25:10.196686    9696 start.go:167] duration metric: took 23.4241113s to libmachine.API.Create "force-systemd-env-970200"
	I1228 07:25:10.196734    9696 start.go:293] postStartSetup for "force-systemd-env-970200" (driver="docker")
	I1228 07:25:10.196734    9696 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 07:25:10.201731    9696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 07:25:10.204880    9696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-970200
	I1228 07:25:10.257078    9696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55469 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-env-970200\id_rsa Username:docker}
	I1228 07:25:10.388301    9696 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 07:25:10.395080    9696 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 07:25:10.395080    9696 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 07:25:10.395080    9696 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1228 07:25:10.395603    9696 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1228 07:25:10.395638    9696 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\135562.pem -> 135562.pem in /etc/ssl/certs
	I1228 07:25:10.395638    9696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\135562.pem -> /etc/ssl/certs/135562.pem
	I1228 07:25:10.401101    9696 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 07:25:10.413188    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\135562.pem --> /etc/ssl/certs/135562.pem (1708 bytes)
	I1228 07:25:10.442872    9696 start.go:296] duration metric: took 246.1337ms for postStartSetup
	I1228 07:25:10.448756    9696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-970200
	I1228 07:25:10.505383    9696 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\config.json ...
	I1228 07:25:10.513913    9696 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:25:10.516906    9696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-970200
	I1228 07:25:10.572252    9696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55469 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-env-970200\id_rsa Username:docker}
	I1228 07:25:10.696710    9696 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 07:25:10.705424    9696 start.go:128] duration metric: took 23.9378474s to createHost
	I1228 07:25:10.705472    9696 start.go:83] releasing machines lock for "force-systemd-env-970200", held for 23.9378954s
	I1228 07:25:10.709366    9696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-970200
	I1228 07:25:10.762799    9696 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1228 07:25:10.766793    9696 ssh_runner.go:195] Run: cat /version.json
	I1228 07:25:10.766793    9696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-970200
	I1228 07:25:10.769793    9696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-970200
	I1228 07:25:10.822138    9696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55469 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-env-970200\id_rsa Username:docker}
	I1228 07:25:10.823128    9696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55469 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-env-970200\id_rsa Username:docker}
	W1228 07:25:10.939422    9696 start.go:879] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1228 07:25:10.944437    9696 ssh_runner.go:195] Run: systemctl --version
	I1228 07:25:10.958450    9696 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 07:25:10.967764    9696 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 07:25:10.972118    9696 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 07:25:11.029785    9696 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1228 07:25:11.029850    9696 start.go:496] detecting cgroup driver to use...
	I1228 07:25:11.029850    9696 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1228 07:25:11.029850    9696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1228 07:25:11.031353    9696 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1228 07:25:11.031353    9696 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1228 07:25:11.061028    9696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1228 07:25:11.083276    9696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1228 07:25:11.099924    9696 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1228 07:25:11.104208    9696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1228 07:25:11.126255    9696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:25:11.144795    9696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1228 07:25:11.163662    9696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:25:11.181759    9696 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 07:25:11.198664    9696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1228 07:25:11.216909    9696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1228 07:25:11.235966    9696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
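Note: the sed runs above switch containerd to the systemd cgroup driver and normalize the runc shim to io.containerd.runc.v2. A quick in-node verification that the edit took:

    grep -n 'SystemdCgroup' /etc/containerd/config.toml
    # expected: SystemdCgroup = true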
	I1228 07:25:11.257891    9696 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 07:25:11.274514    9696 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 07:25:11.293410    9696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:25:11.426300    9696 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1228 07:25:11.589889    9696 start.go:496] detecting cgroup driver to use...
	I1228 07:25:11.590415    9696 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1228 07:25:11.594690    9696 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1228 07:25:11.619883    9696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 07:25:11.642279    9696 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 07:25:11.723613    9696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 07:25:11.745982    9696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1228 07:25:11.765144    9696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:25:11.792315    9696 ssh_runner.go:195] Run: which cri-dockerd
	I1228 07:25:11.803166    9696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1228 07:25:11.818418    9696 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1228 07:25:11.844830    9696 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1228 07:25:11.985555    9696 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1228 07:25:12.126033    9696 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1228 07:25:12.126560    9696 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
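Note: the 129-byte /etc/docker/daemon.json written here is what actually forces dockerd onto the systemd cgroup driver; its contents are not echoed in this log. A way to inspect it in-node (the commented payload is an assumption based on minikube's usual docker configuration, not copied from this run):

    cat /etc/docker/daemon.json
    # plausibly includes: "exec-opts": ["native.cgroupdriver=systemd"]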
	I1228 07:25:12.154652    9696 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1228 07:25:12.176650    9696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:25:12.339736    9696 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1228 07:25:13.347606    9696 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.007854s)
	I1228 07:25:13.352786    9696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 07:25:13.374675    9696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1228 07:25:13.397650    9696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1228 07:25:13.421135    9696 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1228 07:25:13.589381    9696 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1228 07:25:13.731092    9696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:25:13.877638    9696 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1228 07:25:13.903930    9696 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1228 07:25:13.924610    9696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:25:14.069551    9696 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1228 07:25:14.177383    9696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1228 07:25:14.196288    9696 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1228 07:25:14.200943    9696 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1228 07:25:14.209191    9696 start.go:574] Will wait 60s for crictl version
	I1228 07:25:14.213369    9696 ssh_runner.go:195] Run: which crictl
	I1228 07:25:14.225187    9696 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 07:25:14.265205    9696 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1228 07:25:14.270329    9696 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1228 07:25:14.311349    9696 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1228 07:25:14.352172    9696 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I1228 07:25:14.355317    9696 cli_runner.go:164] Run: docker exec -t force-systemd-env-970200 dig +short host.docker.internal
	I1228 07:25:14.491762    9696 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1228 07:25:14.498056    9696 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1228 07:25:14.506451    9696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
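The one-liner above is an idempotent hosts-file update: any stale host.minikube.internal mapping is filtered out before the fresh one is appended, so repeated starts never accumulate duplicate entries. A generalized sketch of the same pattern (update_hosts_entry is a hypothetical helper name, not a minikube function):

	# Sketch of the idempotent /etc/hosts update pattern used above.
	update_hosts_entry() {
	  local ip="$1" name="$2"
	  # Drop any existing line ending in "<tab><name>", then append the new mapping.
	  { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
	  sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
	}
	update_hosts_entry 192.168.65.254 host.minikube.internal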
	I1228 07:25:14.527870    9696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" force-systemd-env-970200
	I1228 07:25:14.585498    9696 kubeadm.go:884] updating cluster {Name:force-systemd-env-970200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-970200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:25:14.585498    9696 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1228 07:25:14.589851    9696 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1228 07:25:14.621935    9696 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1228 07:25:14.621935    9696 docker.go:624] Images already preloaded, skipping extraction
	I1228 07:25:14.625015    9696 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1228 07:25:14.653871    9696 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1228 07:25:14.653920    9696 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:25:14.653967    9696 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 docker true true} ...
	I1228 07:25:14.654093    9696 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-970200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-970200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 07:25:14.658046    9696 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1228 07:25:14.731301    9696 cni.go:84] Creating CNI manager for ""
	I1228 07:25:14.731301    9696 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1228 07:25:14.731301    9696 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 07:25:14.731301    9696 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-970200 NodeName:force-systemd-env-970200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:25:14.731301    9696 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-env-970200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 07:25:14.735181    9696 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:25:14.749388    9696 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:25:14.753451    9696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:25:14.766755    9696 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1228 07:25:14.788234    9696 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:25:14.810494    9696 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
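With kubeadm.yaml.new on disk, the rendered configuration can be sanity-checked by hand before init; recent kubeadm releases ship a `kubeadm config validate` subcommand (this is a manual step, not something the run performs):

	# Manual sanity check of the generated config, assuming a recent kubeadm.
	sudo env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" \
	  kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new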
	I1228 07:25:14.834461    9696 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:25:14.843008    9696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:25:14.862588    9696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:25:14.999077    9696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:25:15.021415    9696 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200 for IP: 192.168.76.2
	I1228 07:25:15.021415    9696 certs.go:195] generating shared ca certs ...
	I1228 07:25:15.021415    9696 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:25:15.022976    9696 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1228 07:25:15.022976    9696 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1228 07:25:15.022976    9696 certs.go:257] generating profile certs ...
	I1228 07:25:15.023678    9696 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\client.key
	I1228 07:25:15.023806    9696 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\client.crt with IP's: []
	I1228 07:25:15.104778    9696 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\client.crt ...
	I1228 07:25:15.104778    9696 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\client.crt: {Name:mkebc07df88ca488872480e1568ae06bfc5179f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:25:15.106214    9696 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\client.key ...
	I1228 07:25:15.106214    9696 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\client.key: {Name:mke000ac33845d1bec16017ccec4e0a175530a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:25:15.107425    9696 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\apiserver.key.a2d3a645
	I1228 07:25:15.107425    9696 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\apiserver.crt.a2d3a645 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1228 07:25:15.180532    9696 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\apiserver.crt.a2d3a645 ...
	I1228 07:25:15.180532    9696 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\apiserver.crt.a2d3a645: {Name:mk72846f7d569d207aceb41b0c36063b2daac959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:25:15.181284    9696 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\apiserver.key.a2d3a645 ...
	I1228 07:25:15.181284    9696 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\apiserver.key.a2d3a645: {Name:mka141cb639ef91fb9d432b5b764982a9a4f85bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:25:15.182770    9696 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\apiserver.crt.a2d3a645 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\apiserver.crt
	I1228 07:25:15.196430    9696 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\apiserver.key.a2d3a645 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\apiserver.key
	I1228 07:25:15.197252    9696 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\proxy-client.key
	I1228 07:25:15.197252    9696 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\proxy-client.crt with IP's: []
	I1228 07:25:15.232655    9696 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\proxy-client.crt ...
	I1228 07:25:15.232655    9696 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\proxy-client.crt: {Name:mkf489fbe801ac33b31789588a4d037778336919 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:25:15.232943    9696 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\proxy-client.key ...
	I1228 07:25:15.232943    9696 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\proxy-client.key: {Name:mk5b1c931212dcad84fa532d9d2fc85573133a5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
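crypto.go performs the signing in Go, but each profile cert above is an ordinary CA-signed x509 pair. An illustrative openssl equivalent for the "minikube-user" client cert, with subject fields assumed from minikube's conventions rather than taken from this log:

	# Illustrative only: minikube generates these certs in Go (crypto.go),
	# not by shelling out to openssl. Paths shortened for readability.
	openssl genrsa -out client.key 2048
	openssl req -new -key client.key \
	  -subj "/O=system:masters/CN=minikube-user" -out client.csr
	openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
	  -CAcreateserial -out client.crt -days 365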
	I1228 07:25:15.233875    9696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1228 07:25:15.234579    9696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1228 07:25:15.234682    9696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1228 07:25:15.234770    9696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1228 07:25:15.234866    9696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1228 07:25:15.234963    9696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1228 07:25:15.235077    9696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1228 07:25:15.247788    9696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1228 07:25:15.248127    9696 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13556.pem (1338 bytes)
	W1228 07:25:15.248522    9696 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13556_empty.pem, impossibly tiny 0 bytes
	I1228 07:25:15.248522    9696 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1228 07:25:15.248869    9696 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1228 07:25:15.249077    9696 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1228 07:25:15.249242    9696 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1228 07:25:15.249384    9696 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\135562.pem (1708 bytes)
	I1228 07:25:15.249780    9696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\135562.pem -> /usr/share/ca-certificates/135562.pem
	I1228 07:25:15.249780    9696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:25:15.249976    9696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13556.pem -> /usr/share/ca-certificates/13556.pem
	I1228 07:25:15.250112    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:25:15.283784    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 07:25:15.310208    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:25:15.338688    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:25:15.365929    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1228 07:25:15.391545    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 07:25:15.419558    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:25:15.448962    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-env-970200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1228 07:25:15.476017    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\135562.pem --> /usr/share/ca-certificates/135562.pem (1708 bytes)
	I1228 07:25:15.506344    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:25:15.535351    9696 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13556.pem --> /usr/share/ca-certificates/13556.pem (1338 bytes)
	I1228 07:25:15.566528    9696 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:25:15.592358    9696 ssh_runner.go:195] Run: openssl version
	I1228 07:25:15.608284    9696 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:25:15.626236    9696 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:25:15.644982    9696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:25:15.654341    9696 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:29 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:25:15.658849    9696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:25:15.705394    9696 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 07:25:15.723518    9696 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1228 07:25:15.744011    9696 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13556.pem
	I1228 07:25:15.762131    9696 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13556.pem /etc/ssl/certs/13556.pem
	I1228 07:25:15.780829    9696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13556.pem
	I1228 07:25:15.789050    9696 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:37 /usr/share/ca-certificates/13556.pem
	I1228 07:25:15.793533    9696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13556.pem
	I1228 07:25:15.840362    9696 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:25:15.859819    9696 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13556.pem /etc/ssl/certs/51391683.0
	I1228 07:25:15.877368    9696 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/135562.pem
	I1228 07:25:15.895613    9696 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/135562.pem /etc/ssl/certs/135562.pem
	I1228 07:25:15.915861    9696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135562.pem
	I1228 07:25:15.926276    9696 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:37 /usr/share/ca-certificates/135562.pem
	I1228 07:25:15.931476    9696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135562.pem
	I1228 07:25:15.979519    9696 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 07:25:15.996851    9696 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/135562.pem /etc/ssl/certs/3ec20f2e.0
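The test/ln pairs above install each CA under OpenSSL's hashed-directory convention: a certificate in /etc/ssl/certs is only found if a symlink named <subject-hash>.0 points at it, which is why every `openssl x509 -hash -noout` call is paired with an `ln -fs`. The generic form of the step, slightly simplified (the run links via an intermediate /etc/ssl/certs/<name>.pem symlink):

	# Generic form of the hashed-symlink installation performed above.
	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")    # e.g. b5213941, as seen above
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"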
	I1228 07:25:16.016826    9696 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:25:16.025294    9696 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1228 07:25:16.025356    9696 kubeadm.go:401] StartCluster: {Name:force-systemd-env-970200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-970200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:25:16.029730    9696 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1228 07:25:16.062957    9696 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:25:16.078958    9696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1228 07:25:16.093963    9696 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:25:16.098826    9696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:25:16.112216    9696 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:25:16.112260    9696 kubeadm.go:158] found existing configuration files:
	
	I1228 07:25:16.116381    9696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:25:16.131407    9696 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:25:16.135396    9696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:25:16.153978    9696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:25:16.168086    9696 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:25:16.172520    9696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:25:16.189911    9696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:25:16.204681    9696 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:25:16.209309    9696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:25:16.228554    9696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:25:16.243156    9696 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:25:16.247745    9696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1228 07:25:16.264965    9696 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:25:16.379195    9696 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1228 07:25:16.463138    9696 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1228 07:25:16.558944    9696 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 07:29:18.286374    9696 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1228 07:29:18.286374    9696 kubeadm.go:319] 
	I1228 07:29:18.286374    9696 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1228 07:29:18.290436    9696 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:29:18.290436    9696 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:29:18.291139    9696 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:29:18.291183    9696 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1228 07:29:18.291183    9696 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1228 07:29:18.291183    9696 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1228 07:29:18.291183    9696 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1228 07:29:18.291900    9696 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1228 07:29:18.292058    9696 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1228 07:29:18.292173    9696 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1228 07:29:18.292376    9696 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1228 07:29:18.292528    9696 kubeadm.go:319] CONFIG_INET: enabled
	I1228 07:29:18.292673    9696 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1228 07:29:18.292876    9696 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1228 07:29:18.293003    9696 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1228 07:29:18.293003    9696 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1228 07:29:18.293003    9696 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1228 07:29:18.293003    9696 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1228 07:29:18.293003    9696 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1228 07:29:18.296489    9696 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1228 07:29:18.296489    9696 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1228 07:29:18.296489    9696 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1228 07:29:18.296489    9696 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1228 07:29:18.297058    9696 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1228 07:29:18.297184    9696 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1228 07:29:18.297184    9696 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1228 07:29:18.297184    9696 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1228 07:29:18.297184    9696 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1228 07:29:18.297184    9696 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1228 07:29:18.297184    9696 kubeadm.go:319] OS: Linux
	I1228 07:29:18.297706    9696 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:29:18.297810    9696 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1228 07:29:18.297810    9696 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:29:18.297810    9696 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:29:18.297810    9696 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:29:18.297810    9696 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:29:18.297810    9696 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:29:18.298368    9696 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:29:18.298368    9696 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1228 07:29:18.298368    9696 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:29:18.298368    9696 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:29:18.298368    9696 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:29:18.298368    9696 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:29:18.301967    9696 out.go:252]   - Generating certificates and keys ...
	I1228 07:29:18.301967    9696 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:29:18.301967    9696 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:29:18.302594    9696 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1228 07:29:18.302594    9696 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1228 07:29:18.302594    9696 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1228 07:29:18.302594    9696 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1228 07:29:18.302594    9696 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1228 07:29:18.303180    9696 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-970200 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1228 07:29:18.303180    9696 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1228 07:29:18.303180    9696 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-970200 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1228 07:29:18.303779    9696 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1228 07:29:18.303779    9696 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1228 07:29:18.303779    9696 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1228 07:29:18.303779    9696 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:29:18.303779    9696 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:29:18.303779    9696 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:29:18.303779    9696 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:29:18.303779    9696 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:29:18.303779    9696 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:29:18.304772    9696 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:29:18.304772    9696 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:29:18.307765    9696 out.go:252]   - Booting up control plane ...
	I1228 07:29:18.307765    9696 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:29:18.307765    9696 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:29:18.307765    9696 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:29:18.308765    9696 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:29:18.308765    9696 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:29:18.308765    9696 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:29:18.308765    9696 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:29:18.308765    9696 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:29:18.308765    9696 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:29:18.309765    9696 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 07:29:18.309765    9696 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000221903s
	I1228 07:29:18.309765    9696 kubeadm.go:319] 
	I1228 07:29:18.309765    9696 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1228 07:29:18.309765    9696 kubeadm.go:319] 	- The kubelet is not running
	I1228 07:29:18.309765    9696 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1228 07:29:18.309765    9696 kubeadm.go:319] 
	I1228 07:29:18.309765    9696 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1228 07:29:18.309765    9696 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1228 07:29:18.309765    9696 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1228 07:29:18.309765    9696 kubeadm.go:319] 
	W1228 07:29:18.310763    9696 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-970200 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-970200 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000221903s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
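The healthz probe that kubeadm gave up on can be reproduced by exec-ing into the node container from the host, a manual triage sketch assuming the kicbase image ships curl (the container name matches this profile):

	# Manual triage of the failed kubelet health check, run from the host.
	docker exec -t force-systemd-env-970200 systemctl status kubelet --no-pager
	docker exec -t force-systemd-env-970200 journalctl -xeu kubelet --no-pager | tail -n 50
	# The endpoint kubeadm polled for 4m0s:
	docker exec -t force-systemd-env-970200 curl -sS http://127.0.0.1:10248/healthz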
	
	I1228 07:29:18.314765    9696 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1228 07:29:18.774485    9696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:29:18.792072    9696 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:29:18.797902    9696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:29:18.811497    9696 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:29:18.811497    9696 kubeadm.go:158] found existing configuration files:
	
	I1228 07:29:18.818786    9696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:29:18.834159    9696 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:29:18.839435    9696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:29:18.860381    9696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:29:18.874367    9696 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:29:18.878361    9696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:29:18.893361    9696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:29:18.905370    9696 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:29:18.908361    9696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:29:18.931477    9696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:29:18.946467    9696 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:29:18.950457    9696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1228 07:29:18.966463    9696 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:29:19.093661    9696 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1228 07:29:19.197006    9696 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1228 07:29:19.304363    9696 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 07:33:20.135976    9696 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1228 07:33:20.136053    9696 kubeadm.go:319] 
	I1228 07:33:20.136251    9696 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
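The fatal condition is the kubelet health probe: kubeadm polls http://127.0.0.1:10248/healthz for up to 4m0s and aborts while the connection is still refused, i.e. the kubelet never came up far enough to bind its health endpoint. A manual triage on the node, using only the probe and commands kubeadm itself names in this log (the ordering is an editorial suggestion, not part of the test):
	curl -sSL http://127.0.0.1:10248/healthz   # kubeadm's exact probe; prints "ok" on a healthy kubelet
	systemctl status kubelet                   # is the unit active at all?
	journalctl -xeu kubelet                    # usually names the startup failure, e.g. a cgroup-driver mismatch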
	I1228 07:33:20.140165    9696 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:33:20.140165    9696 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:33:20.140790    9696 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:33:20.141205    9696 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1228 07:33:20.141418    9696 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1228 07:33:20.141589    9696 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1228 07:33:20.141692    9696 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1228 07:33:20.141848    9696 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1228 07:33:20.141934    9696 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1228 07:33:20.142089    9696 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1228 07:33:20.142304    9696 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1228 07:33:20.142481    9696 kubeadm.go:319] CONFIG_INET: enabled
	I1228 07:33:20.142692    9696 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1228 07:33:20.142882    9696 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1228 07:33:20.143162    9696 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1228 07:33:20.143358    9696 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] OS: Linux
	I1228 07:33:20.143723    9696 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:33:20.145178    9696 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1228 07:33:20.145323    9696 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:33:20.145323    9696 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:33:20.145516    9696 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:33:20.145650    9696 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:33:20.145725    9696 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:33:20.145808    9696 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:33:20.145913    9696 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1228 07:33:20.146098    9696 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:33:20.146347    9696 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:33:20.146347    9696 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:33:20.146347    9696 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:33:20.150327    9696 out.go:252]   - Generating certificates and keys ...
	I1228 07:33:20.150327    9696 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:33:20.150327    9696 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:33:20.150327    9696 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1228 07:33:20.150327    9696 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1228 07:33:20.150327    9696 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1228 07:33:20.150327    9696 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1228 07:33:20.150327    9696 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1228 07:33:20.151389    9696 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1228 07:33:20.151389    9696 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1228 07:33:20.151389    9696 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1228 07:33:20.151389    9696 kubeadm.go:319] [certs] Using the existing "sa" key
	I1228 07:33:20.151931    9696 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:33:20.152007    9696 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:33:20.152007    9696 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:33:20.152007    9696 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:33:20.152007    9696 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:33:20.152007    9696 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:33:20.152592    9696 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:33:20.152592    9696 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:33:20.155905    9696 out.go:252]   - Booting up control plane ...
	I1228 07:33:20.155905    9696 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:33:20.155905    9696 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:33:20.156471    9696 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:33:20.156471    9696 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:33:20.156471    9696 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:33:20.157067    9696 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:33:20.157067    9696 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:33:20.157067    9696 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:33:20.157067    9696 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:33:20.157067    9696 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 07:33:20.157067    9696 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001192s
	I1228 07:33:20.157067    9696 kubeadm.go:319] 
	I1228 07:33:20.157067    9696 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1228 07:33:20.157067    9696 kubeadm.go:319] 	- The kubelet is not running
	I1228 07:33:20.158037    9696 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1228 07:33:20.158037    9696 kubeadm.go:319] 
	I1228 07:33:20.158037    9696 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1228 07:33:20.158037    9696 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1228 07:33:20.158037    9696 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1228 07:33:20.158037    9696 kubeadm.go:319] 
	I1228 07:33:20.158037    9696 kubeadm.go:403] duration metric: took 8m4.1250829s to StartCluster
	I1228 07:33:20.161461    9696 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:33:20.184099    9696 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:33:20Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:33:20.188880    9696 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:33:20.207224    9696 logs.go:279] Failed to list containers for "etcd": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:33:20Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:33:20.211101    9696 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:33:20.230703    9696 logs.go:279] Failed to list containers for "coredns": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:33:20Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:33:20.236066    9696 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:33:20.254817    9696 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:33:20Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:33:20.259912    9696 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:33:20.285037    9696 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:33:20Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:33:20.290304    9696 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:33:20.311093    9696 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:33:20Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:33:20.316667    9696 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:33:20.336663    9696 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:33:20Z" level=error msg="open /run/runc: no such file or directory"
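Every per-component runc query above fails identically: /run/runc, runc's default state directory, does not exist, so no container was ever created on this node and the log collectors have nothing to list. A quick manual confirmation, assuming the default state directory named in the error (--root is runc's standard flag for overriding it):
	sudo ls /run/runc                  # "No such file or directory" here matches the errors above
	sudo runc --root /run/runc list    # the same listing minikube attempted, with the root spelled out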
	I1228 07:33:20.336663    9696 logs.go:123] Gathering logs for kubelet ...
	I1228 07:33:20.336715    9696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:33:20.412171    9696 logs.go:123] Gathering logs for dmesg ...
	I1228 07:33:20.412171    9696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:33:20.466690    9696 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:33:20.466690    9696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:33:20.553679    9696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1228 07:33:20.542100   10286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:33:20.543002   10286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:33:20.545649   10286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:33:20.547998   10286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:33:20.548784   10286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1228 07:33:20.553679    9696 logs.go:123] Gathering logs for Docker ...
	I1228 07:33:20.553679    9696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:33:20.595325    9696 logs.go:123] Gathering logs for container status ...
	I1228 07:33:20.595325    9696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
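The container-status collector above is a shell fallback chain: `which crictl || echo crictl` substitutes the bare name when crictl is not on PATH (so the first command fails cleanly instead of sudo receiving an empty argument), and `|| sudo docker ps -a` then lists containers via Docker instead. Modulo the absolute-path lookup, it is equivalent to:
	sudo crictl ps -a || sudo docker ps -a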
	W1228 07:33:20.661334    9696 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001192s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1228 07:33:20.662333    9696 out.go:285] * 
	W1228 07:33:20.662333    9696 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001192s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1228 07:33:20.662333    9696 out.go:285] * 
	W1228 07:33:20.662333    9696 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 07:33:20.669336    9696 out.go:203] 
	W1228 07:33:20.674317    9696 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001192s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1228 07:33:20.674317    9696 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1228 07:33:20.674317    9696 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1228 07:33:20.685326    9696 out.go:203] 

                                                
                                                
** /stderr **
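The log's own suggestion is to pin the kubelet's cgroup driver to systemd. For this profile the retry would look like the following, reusing the flags from the failed invocation (see the next line) plus the suggested --extra-config; whether it resolves this particular cgroup-v1/WSL2 failure is not verified here:
	out/minikube-windows-amd64.exe delete -p force-systemd-env-970200
	out/minikube-windows-amd64.exe start -p force-systemd-env-970200 --memory=3072 --driver=docker --extra-config=kubelet.cgroup-driver=systemd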
docker_test.go:157: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-env-970200 --memory=3072 --alsologtostderr -v=5 --driver=docker" : exit status 109
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-970200 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-12-28 07:33:21.7578879 +0000 UTC m=+3919.263768401
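The assertion this test never reached (docker_test.go:110) reads Docker's cgroup driver from inside the node; with systemd forcing in effect (here via the MINIKUBE_FORCE_SYSTEMD environment variable, as the test's name suggests) it should print systemd, while cgroupfs would mean forcing did not take. Against a healthy profile the check can be run by hand exactly as the test does:
	out/minikube-windows-amd64.exe -p force-systemd-env-970200 ssh "docker info --format {{.CgroupDriver}}"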
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-env-970200
helpers_test.go:244: (dbg) docker inspect force-systemd-env-970200:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4b76351289e9ccafefc5afa45d4c0df8512e8e84e08fad09e83a066d73cc59ca",
	        "Created": "2025-12-28T07:25:03.522447578Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 232888,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:25:03.81257355Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/4b76351289e9ccafefc5afa45d4c0df8512e8e84e08fad09e83a066d73cc59ca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4b76351289e9ccafefc5afa45d4c0df8512e8e84e08fad09e83a066d73cc59ca/hostname",
	        "HostsPath": "/var/lib/docker/containers/4b76351289e9ccafefc5afa45d4c0df8512e8e84e08fad09e83a066d73cc59ca/hosts",
	        "LogPath": "/var/lib/docker/containers/4b76351289e9ccafefc5afa45d4c0df8512e8e84e08fad09e83a066d73cc59ca/4b76351289e9ccafefc5afa45d4c0df8512e8e84e08fad09e83a066d73cc59ca-json.log",
	        "Name": "/force-systemd-env-970200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "force-systemd-env-970200:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-970200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/940a5267c6fec264f7e91e8badc17e700164de0da8832ee50c85f26a40e190c4-init/diff:/var/lib/docker/overlay2/755790e5dd4d70e5001883ef2a2cf79adb7d5054e85cb9aeffa64c965a5cf81c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/940a5267c6fec264f7e91e8badc17e700164de0da8832ee50c85f26a40e190c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/940a5267c6fec264f7e91e8badc17e700164de0da8832ee50c85f26a40e190c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/940a5267c6fec264f7e91e8badc17e700164de0da8832ee50c85f26a40e190c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-970200",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-970200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-970200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-970200",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-970200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "41da7dc9bf51a42af7d8e9a9d82943c9f4278fa75356ffc484338db4e084fa09",
	            "SandboxKey": "/var/run/docker/netns/41da7dc9bf51",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55469"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55470"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55471"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55472"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55473"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-970200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8ed4bda830d20ed837b2dfcdd3f17177f2e6df1f43045db9b7934266a794f63e",
	                    "EndpointID": "4308b638e2d45067aba236411e2bffba643deb875dc8420c6875ca72ee49ef2c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-970200",
	                        "4b76351289e9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
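The JSON above is ordinary "docker container inspect" output for the node container. For reference, a narrower hand-run check that pulls only the published port map (not part of the test; standard docker CLI) would be:

	docker container inspect force-systemd-env-970200 --format "{{json .NetworkSettings.Ports}}"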
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-env-970200 -n force-systemd-env-970200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-env-970200 -n force-systemd-env-970200: exit status 6 (622.9981ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1228 07:33:22.400898    9376 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-970200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
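The stale-context warning in the status output above names its own fix; scoped to this profile, and assuming the same binary path used throughout this run, that would be:

	out/minikube-windows-amd64.exe update-context -p force-systemd-env-970200

Note that the underlying stderr error would likely persist regardless: the kubeconfig never received an endpoint for force-systemd-env-970200 because the start itself failed.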
helpers_test.go:253: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-970200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-970200 logs -n 25: (1.367423s)
helpers_test.go:261: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                          ARGS                                          │         PROFILE          │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-410600 sudo iptables -t nat -L -n -v                                 │ custom-flannel-410600    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │ 28 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-410600 sudo cat /run/flannel/subnet.env                              │ custom-flannel-410600    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │ 28 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-410600 sudo cat /etc/kube-flannel/cni-conf.json                      │ custom-flannel-410600    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p custom-flannel-410600 sudo systemctl status kubelet --all --full --no-pager         │ custom-flannel-410600    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │ 28 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-410600 sudo systemctl cat kubelet --no-pager                         │ custom-flannel-410600    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │ 28 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-410600 sudo journalctl -xeu kubelet --all --full --no-pager          │ custom-flannel-410600    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │ 28 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-410600 sudo cat /etc/kubernetes/kubelet.conf                         │ custom-flannel-410600    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │ 28 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-410600 sudo cat /var/lib/kubelet/config.yaml                         │ custom-flannel-410600    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │ 28 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-410600 sudo systemctl status docker --all --full --no-pager          │ custom-flannel-410600    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │ 28 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-410600 sudo systemctl cat docker --no-pager                          │ custom-flannel-410600    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │ 28 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-410600 sudo cat /etc/docker/daemon.json                              │ custom-flannel-410600    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │ 28 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-410600 sudo docker system info                                       │ custom-flannel-410600    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │ 28 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-410600 sudo systemctl status cri-docker --all --full --no-pager      │ custom-flannel-410600    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │ 28 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-410600 sudo systemctl cat cri-docker --no-pager                      │ custom-flannel-410600    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │ 28 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-410600 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ custom-flannel-410600    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │ 28 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-410600 sudo cat /usr/lib/systemd/system/cri-docker.service           │ custom-flannel-410600    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │ 28 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-410600 sudo cri-dockerd --version                                    │ custom-flannel-410600    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │ 28 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-410600 sudo systemctl status containerd --all --full --no-pager      │ custom-flannel-410600    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │ 28 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-410600 sudo systemctl cat containerd --no-pager                      │ custom-flannel-410600    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │ 28 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-410600 sudo cat /lib/systemd/system/containerd.service               │ custom-flannel-410600    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │ 28 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-410600 sudo cat /etc/containerd/config.toml                          │ custom-flannel-410600    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │ 28 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-410600 sudo containerd config dump                                   │ custom-flannel-410600    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │ 28 Dec 25 07:33 UTC │
	│ ssh     │ force-systemd-env-970200 ssh docker info --format {{.CgroupDriver}}                    │ force-systemd-env-970200 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │ 28 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-410600 sudo systemctl status crio --all --full --no-pager            │ custom-flannel-410600    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p custom-flannel-410600 sudo systemctl cat crio --no-pager                            │ custom-flannel-410600    │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:33 UTC │ 28 Dec 25 07:33 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
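	# The force-systemd-env-970200 row above ("ssh docker info --format {{.CgroupDriver}}") is the
	# probe this test asserts on: with MINIKUBE_FORCE_SYSTEMD set, it expects "systemd" rather than
	# the default "cgroupfs". Replayed by hand (assuming the same binary path), it would look roughly like:
	#   out/minikube-windows-amd64.exe -p force-systemd-env-970200 ssh -- docker info --format "{{.CgroupDriver}}"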
	
	
	==> Last Start <==
	I1228 07:31:34.192205    9412 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55937/healthz ...
	I1228 07:31:34.194977    9412 api_server.go:315] stopped: https://127.0.0.1:55937/healthz: Get "https://127.0.0.1:55937/healthz": EOF
	I1228 07:31:34.198725    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:31:34.231276    9412 logs.go:282] 2 containers: [7b42c98a4262 bfa8dc267780]
	I1228 07:31:34.235151    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:31:34.266536    9412 logs.go:282] 1 containers: [94cc14c728d5]
	I1228 07:31:34.270506    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:31:34.305459    9412 logs.go:282] 0 containers: []
	W1228 07:31:34.305459    9412 logs.go:284] No container was found matching "coredns"
	I1228 07:31:34.312554    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:31:34.343491    9412 logs.go:282] 2 containers: [3ddeeca293a6 47ffeb4b853d]
	I1228 07:31:34.346483    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:31:34.379485    9412 logs.go:282] 0 containers: []
	W1228 07:31:34.379485    9412 logs.go:284] No container was found matching "kube-proxy"
	I1228 07:31:34.382485    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:31:34.417436    9412 logs.go:282] 2 containers: [3f64f9a54844 67014a6dfb79]
	I1228 07:31:34.422095    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:31:34.458789    9412 logs.go:282] 0 containers: []
	W1228 07:31:34.458789    9412 logs.go:284] No container was found matching "kindnet"
	I1228 07:31:34.462789    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:31:34.493904    9412 logs.go:282] 0 containers: []
	W1228 07:31:34.493904    9412 logs.go:284] No container was found matching "storage-provisioner"
	I1228 07:31:34.493904    9412 logs.go:123] Gathering logs for kubelet ...
	I1228 07:31:34.493904    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:31:34.574922    9412 logs.go:123] Gathering logs for dmesg ...
	I1228 07:31:34.574922    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:31:34.614915    9412 logs.go:123] Gathering logs for kube-apiserver [bfa8dc267780] ...
	I1228 07:31:34.614915    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa8dc267780"
	I1228 07:31:34.665686    9412 logs.go:123] Gathering logs for kube-scheduler [47ffeb4b853d] ...
	I1228 07:31:34.665686    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47ffeb4b853d"
	I1228 07:31:34.710706    9412 logs.go:123] Gathering logs for kube-controller-manager [67014a6dfb79] ...
	I1228 07:31:34.710706    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67014a6dfb79"
	I1228 07:31:34.762896    9412 logs.go:123] Gathering logs for Docker ...
	I1228 07:31:34.762896    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:31:34.801676    9412 logs.go:123] Gathering logs for container status ...
	I1228 07:31:34.801676    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
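	# The backquoted `which crictl || echo crictl` above resolves crictl's full path when it is
	# installed and otherwise substitutes the bare name, so a missing binary makes the first
	# command fail and the `|| sudo docker ps -a` fallback run instead.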
	I1228 07:31:34.868840    9412 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:31:34.868840    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:31:34.963252    9412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
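	# "connection refused" on localhost:8443 here matches the failing healthz probes above: nothing
	# appears to be answering on the apiserver port inside the node. A hand probe against the
	# host-mapped port shown in this log (assumption: the same mapping is still active) would be:
	#   curl -k https://127.0.0.1:55937/healthz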
	I1228 07:31:34.963252    9412 logs.go:123] Gathering logs for kube-apiserver [7b42c98a4262] ...
	I1228 07:31:34.963252    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b42c98a4262"
	I1228 07:31:35.027062    9412 logs.go:123] Gathering logs for etcd [94cc14c728d5] ...
	I1228 07:31:35.028062    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94cc14c728d5"
	I1228 07:31:35.125181    9412 logs.go:123] Gathering logs for kube-scheduler [3ddeeca293a6] ...
	I1228 07:31:35.125181    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ddeeca293a6"
	I1228 07:31:35.203715    9412 logs.go:123] Gathering logs for kube-controller-manager [3f64f9a54844] ...
	I1228 07:31:35.203715    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64f9a54844"
	I1228 07:31:37.746172    9412 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55937/healthz ...
	I1228 07:31:37.749733    9412 api_server.go:315] stopped: https://127.0.0.1:55937/healthz: Get "https://127.0.0.1:55937/healthz": EOF
	I1228 07:31:37.753270    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:31:37.783977    9412 logs.go:282] 2 containers: [7b42c98a4262 bfa8dc267780]
	I1228 07:31:37.787320    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:31:37.816532    9412 logs.go:282] 1 containers: [94cc14c728d5]
	I1228 07:31:37.820056    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:31:37.849405    9412 logs.go:282] 0 containers: []
	W1228 07:31:37.849454    9412 logs.go:284] No container was found matching "coredns"
	I1228 07:31:37.853416    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:31:37.889910    9412 logs.go:282] 2 containers: [3ddeeca293a6 47ffeb4b853d]
	I1228 07:31:37.894265    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:31:37.931355    9412 logs.go:282] 0 containers: []
	W1228 07:31:37.931355    9412 logs.go:284] No container was found matching "kube-proxy"
	I1228 07:31:37.934639    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:31:37.963632    9412 logs.go:282] 3 containers: [8169474521a1 3f64f9a54844 67014a6dfb79]
	I1228 07:31:37.966631    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:31:37.995631    9412 logs.go:282] 0 containers: []
	W1228 07:31:37.995631    9412 logs.go:284] No container was found matching "kindnet"
	I1228 07:31:37.998629    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:31:38.036152    9412 logs.go:282] 0 containers: []
	W1228 07:31:38.036152    9412 logs.go:284] No container was found matching "storage-provisioner"
	I1228 07:31:38.036152    9412 logs.go:123] Gathering logs for etcd [94cc14c728d5] ...
	I1228 07:31:38.036152    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94cc14c728d5"
	I1228 07:31:38.081139    9412 logs.go:123] Gathering logs for kube-scheduler [3ddeeca293a6] ...
	I1228 07:31:38.081139    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ddeeca293a6"
	I1228 07:31:38.113899    9412 logs.go:123] Gathering logs for kube-scheduler [47ffeb4b853d] ...
	I1228 07:31:38.113899    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47ffeb4b853d"
	I1228 07:31:38.155133    9412 logs.go:123] Gathering logs for kube-controller-manager [67014a6dfb79] ...
	I1228 07:31:38.155133    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67014a6dfb79"
	I1228 07:31:38.196740    9412 logs.go:123] Gathering logs for Docker ...
	I1228 07:31:38.196740    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:31:38.456178    9412 logs.go:123] Gathering logs for dmesg ...
	I1228 07:31:38.456178    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:31:38.492180    9412 logs.go:123] Gathering logs for kube-apiserver [bfa8dc267780] ...
	I1228 07:31:38.492180    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa8dc267780"
	I1228 07:31:38.537169    9412 logs.go:123] Gathering logs for kube-controller-manager [8169474521a1] ...
	I1228 07:31:38.537169    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8169474521a1"
	I1228 07:31:38.577607    9412 logs.go:123] Gathering logs for kube-controller-manager [3f64f9a54844] ...
	I1228 07:31:38.577607    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64f9a54844"
	I1228 07:31:38.610854    9412 logs.go:123] Gathering logs for container status ...
	I1228 07:31:38.610854    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:31:38.732468    9412 logs.go:123] Gathering logs for kubelet ...
	I1228 07:31:38.732468    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:31:38.805786    9412 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:31:38.805786    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:31:38.907849    9412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:31:38.907849    9412 logs.go:123] Gathering logs for kube-apiserver [7b42c98a4262] ...
	I1228 07:31:38.907849    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b42c98a4262"
	Log file created at: 2025/12/28 07:31:39
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 07:31:39.074483    3796 out.go:360] Setting OutFile to fd 576 ...
	I1228 07:31:39.131923    3796 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:31:39.131923    3796 out.go:374] Setting ErrFile to fd 1980...
	I1228 07:31:39.131923    3796 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:31:39.148064    3796 out.go:368] Setting JSON to false
	I1228 07:31:39.150235    3796 start.go:133] hostinfo: {"hostname":"minikube4","uptime":7038,"bootTime":1766900060,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"22H2","kernelVersion":"10.0.19045.6691 Build 19045.6691","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1228 07:31:39.150235    3796 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1228 07:31:39.156271    3796 out.go:179] * [custom-flannel-410600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	I1228 07:31:39.159573    3796 notify.go:221] Checking for updates...
	I1228 07:31:39.159573    3796 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1228 07:31:39.163074    3796 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:31:39.164705    3796 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1228 07:31:39.166703    3796 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:31:39.169719    3796 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:31:35.692370   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:31:35.695158   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": EOF
	I1228 07:31:35.699070   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:31:35.732548   10604 logs.go:282] 1 containers: [98141142a325]
	I1228 07:31:35.736655   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:31:35.766951   10604 logs.go:282] 1 containers: [86effdc549fd]
	I1228 07:31:35.772004   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:31:35.803178   10604 logs.go:282] 1 containers: [51d165bee2b3]
	I1228 07:31:35.806852   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:31:35.852945   10604 logs.go:282] 2 containers: [7e91a1649939 3e6ad5f26302]
	I1228 07:31:35.856631   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:31:35.891534   10604 logs.go:282] 1 containers: [54081c44d8cf]
	I1228 07:31:35.896322   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:31:35.931981   10604 logs.go:282] 1 containers: [0c08327d6047]
	I1228 07:31:35.935572   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:31:35.966995   10604 logs.go:282] 0 containers: []
	W1228 07:31:35.966995   10604 logs.go:284] No container was found matching "kindnet"
	I1228 07:31:35.970583   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:31:36.006008   10604 logs.go:282] 1 containers: [8391aeb4a821]
	I1228 07:31:36.006279   10604 logs.go:123] Gathering logs for Docker ...
	I1228 07:31:36.006337   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:31:36.066769   10604 logs.go:123] Gathering logs for container status ...
	I1228 07:31:36.066769   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:31:36.134875   10604 logs.go:123] Gathering logs for dmesg ...
	I1228 07:31:36.134875   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:31:36.173234   10604 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:31:36.173234   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:31:36.265065   10604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:31:36.265065   10604 logs.go:123] Gathering logs for kube-apiserver [98141142a325] ...
	I1228 07:31:36.265065   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98141142a325"
	I1228 07:31:36.313177   10604 logs.go:123] Gathering logs for etcd [86effdc549fd] ...
	I1228 07:31:36.313263   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86effdc549fd"
	I1228 07:31:36.357499   10604 logs.go:123] Gathering logs for coredns [51d165bee2b3] ...
	I1228 07:31:36.357499   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d165bee2b3"
	I1228 07:31:36.391006   10604 logs.go:123] Gathering logs for kube-proxy [54081c44d8cf] ...
	I1228 07:31:36.391030   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54081c44d8cf"
	I1228 07:31:36.429926   10604 logs.go:123] Gathering logs for kube-controller-manager [0c08327d6047] ...
	I1228 07:31:36.430001   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c08327d6047"
	I1228 07:31:36.463808   10604 logs.go:123] Gathering logs for storage-provisioner [8391aeb4a821] ...
	I1228 07:31:36.463808   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8391aeb4a821"
	I1228 07:31:36.503754   10604 logs.go:123] Gathering logs for kubelet ...
	I1228 07:31:36.503891   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:31:36.607555   10604 logs.go:123] Gathering logs for kube-scheduler [7e91a1649939] ...
	I1228 07:31:36.607555   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e91a1649939"
	I1228 07:31:36.681988   10604 logs.go:123] Gathering logs for kube-scheduler [3e6ad5f26302] ...
	I1228 07:31:36.681988   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6ad5f26302"
	I1228 07:31:39.220398   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:31:39.224921   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": EOF
	I1228 07:31:39.229833   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:31:39.172257    3796 config.go:182] Loaded profile config "force-systemd-env-970200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 07:31:39.172915    3796 config.go:182] Loaded profile config "kubernetes-upgrade-365300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 07:31:39.173147    3796 config.go:182] Loaded profile config "running-upgrade-509300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1228 07:31:39.173147    3796 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:31:39.293418    3796 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1228 07:31:39.296895    3796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:31:39.536860    3796 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-28 07:31:39.512522316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1228 07:31:39.541283    3796 out.go:179] * Using the docker driver based on user configuration
	I1228 07:31:39.546074    3796 start.go:309] selected driver: docker
	I1228 07:31:39.546105    3796 start.go:928] validating driver "docker" against <nil>
	I1228 07:31:39.546137    3796 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:31:39.555409    3796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:31:39.804462    3796 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-28 07:31:39.784228255 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1228 07:31:39.805464    3796 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 07:31:39.805464    3796 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:31:39.808472    3796 out.go:179] * Using Docker Desktop driver with root privileges
	I1228 07:31:39.812462    3796 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1228 07:31:39.812462    3796 start_flags.go:342] Found "testdata\\kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1228 07:31:39.812462    3796 start.go:353] cluster config:
	{Name:custom-flannel-410600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:custom-flannel-410600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:31:39.816466    3796 out.go:179] * Starting "custom-flannel-410600" primary control-plane node in "custom-flannel-410600" cluster
	I1228 07:31:39.818461    3796 cache.go:134] Beginning downloading kic base image for docker with docker
	I1228 07:31:39.820462    3796 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:31:39.823476    3796 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:31:39.823476    3796 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1228 07:31:39.824468    3796 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1228 07:31:39.824468    3796 cache.go:65] Caching tarball of preloaded images
	I1228 07:31:39.824468    3796 preload.go:251] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1228 07:31:39.824468    3796 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1228 07:31:39.824468    3796 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\config.json ...
	I1228 07:31:39.824468    3796 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\config.json: {Name:mk6fa29fd13c6790d60542df6b94b4fbbf3a4896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:31:39.900224    3796 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:31:39.900224    3796 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:31:39.900224    3796 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:31:39.900224    3796 start.go:360] acquireMachinesLock for custom-flannel-410600: {Name:mkf4e3c81c8d49832d51ec0dffe851135aebaf84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:31:39.901233    3796 start.go:364] duration metric: took 0s to acquireMachinesLock for "custom-flannel-410600"
	I1228 07:31:39.901233    3796 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-410600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:custom-flannel-410600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1228 07:31:39.901233    3796 start.go:125] createHost starting for "" (driver="docker")
	I1228 07:31:41.447500    9412 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55937/healthz ...
	I1228 07:31:41.450801    9412 api_server.go:315] stopped: https://127.0.0.1:55937/healthz: Get "https://127.0.0.1:55937/healthz": EOF
	I1228 07:31:41.456240    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:31:41.496391    9412 logs.go:282] 2 containers: [7b42c98a4262 bfa8dc267780]
	I1228 07:31:41.499900    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:31:41.531243    9412 logs.go:282] 1 containers: [94cc14c728d5]
	I1228 07:31:41.537063    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:31:41.565800    9412 logs.go:282] 0 containers: []
	W1228 07:31:41.565800    9412 logs.go:284] No container was found matching "coredns"
	I1228 07:31:41.568786    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:31:41.598548    9412 logs.go:282] 2 containers: [3ddeeca293a6 47ffeb4b853d]
	I1228 07:31:41.602395    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:31:41.632031    9412 logs.go:282] 0 containers: []
	W1228 07:31:41.632031    9412 logs.go:284] No container was found matching "kube-proxy"
	I1228 07:31:41.635184    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:31:41.666649    9412 logs.go:282] 3 containers: [8169474521a1 3f64f9a54844 67014a6dfb79]
	I1228 07:31:41.669679    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:31:41.699210    9412 logs.go:282] 0 containers: []
	W1228 07:31:41.699290    9412 logs.go:284] No container was found matching "kindnet"
	I1228 07:31:41.702682    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:31:41.730586    9412 logs.go:282] 0 containers: []
	W1228 07:31:41.730586    9412 logs.go:284] No container was found matching "storage-provisioner"
	I1228 07:31:41.730586    9412 logs.go:123] Gathering logs for kube-scheduler [47ffeb4b853d] ...
	I1228 07:31:41.730586    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47ffeb4b853d"
	I1228 07:31:41.779638    9412 logs.go:123] Gathering logs for container status ...
	I1228 07:31:41.779699    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:31:41.857301    9412 logs.go:123] Gathering logs for kubelet ...
	I1228 07:31:41.857301    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:31:41.931649    9412 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:31:41.931699    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:31:42.040700    9412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:31:42.040700    9412 logs.go:123] Gathering logs for kube-apiserver [bfa8dc267780] ...
	I1228 07:31:42.040700    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa8dc267780"
	I1228 07:31:42.092816    9412 logs.go:123] Gathering logs for etcd [94cc14c728d5] ...
	I1228 07:31:42.093794    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94cc14c728d5"
	I1228 07:31:42.136344    9412 logs.go:123] Gathering logs for kube-controller-manager [8169474521a1] ...
	I1228 07:31:42.136344    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8169474521a1"
	I1228 07:31:42.185055    9412 logs.go:123] Gathering logs for kube-controller-manager [3f64f9a54844] ...
	I1228 07:31:42.185055    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64f9a54844"
	I1228 07:31:42.221927    9412 logs.go:123] Gathering logs for kube-controller-manager [67014a6dfb79] ...
	I1228 07:31:42.221927    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67014a6dfb79"
	I1228 07:31:42.269034    9412 logs.go:123] Gathering logs for Docker ...
	I1228 07:31:42.269034    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:31:42.306595    9412 logs.go:123] Gathering logs for dmesg ...
	I1228 07:31:42.306595    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:31:42.346655    9412 logs.go:123] Gathering logs for kube-apiserver [7b42c98a4262] ...
	I1228 07:31:42.346655    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b42c98a4262"
	I1228 07:31:42.387297    9412 logs.go:123] Gathering logs for kube-scheduler [3ddeeca293a6] ...
	I1228 07:31:42.387297    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ddeeca293a6"
	I1228 07:31:39.904215    3796 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1228 07:31:39.904215    3796 start.go:159] libmachine.API.Create for "custom-flannel-410600" (driver="docker")
	I1228 07:31:39.904215    3796 client.go:173] LocalClient.Create starting
	I1228 07:31:39.905232    3796 main.go:144] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1228 07:31:39.905232    3796 main.go:144] libmachine: Decoding PEM data...
	I1228 07:31:39.905232    3796 main.go:144] libmachine: Parsing certificate...
	I1228 07:31:39.905232    3796 main.go:144] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1228 07:31:39.905232    3796 main.go:144] libmachine: Decoding PEM data...
	I1228 07:31:39.905232    3796 main.go:144] libmachine: Parsing certificate...
	I1228 07:31:39.910222    3796 cli_runner.go:164] Run: docker network inspect custom-flannel-410600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1228 07:31:39.962427    3796 cli_runner.go:211] docker network inspect custom-flannel-410600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1228 07:31:39.968341    3796 network_create.go:284] running [docker network inspect custom-flannel-410600] to gather additional debugging logs...
	I1228 07:31:39.968341    3796 cli_runner.go:164] Run: docker network inspect custom-flannel-410600
	W1228 07:31:40.021226    3796 cli_runner.go:211] docker network inspect custom-flannel-410600 returned with exit code 1
	I1228 07:31:40.021226    3796 network_create.go:287] error running [docker network inspect custom-flannel-410600]: docker network inspect custom-flannel-410600: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-410600 not found
	I1228 07:31:40.021226    3796 network_create.go:289] output of [docker network inspect custom-flannel-410600]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-410600 not found
	
	** /stderr **
	I1228 07:31:40.024002    3796 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:31:40.089003    3796 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1228 07:31:40.105004    3796 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1228 07:31:40.118008    3796 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018b8180}
	I1228 07:31:40.118008    3796 network_create.go:124] attempt to create docker network custom-flannel-410600 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1228 07:31:40.121020    3796 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-410600 custom-flannel-410600
	W1228 07:31:40.175632    3796 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-410600 custom-flannel-410600 returned with exit code 1
	W1228 07:31:40.176903    3796 network_create.go:149] failed to create docker network custom-flannel-410600 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-410600 custom-flannel-410600: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1228 07:31:40.176944    3796 network_create.go:116] failed to create docker network custom-flannel-410600 192.168.67.0/24, will retry: subnet is taken
	I1228 07:31:40.198312    3796 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1228 07:31:40.228988    3796 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1228 07:31:40.243371    3796 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00187e270}
	I1228 07:31:40.243371    3796 network_create.go:124] attempt to create docker network custom-flannel-410600 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1228 07:31:40.246369    3796 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-410600 custom-flannel-410600
	I1228 07:31:40.388138    3796 network_create.go:108] docker network custom-flannel-410600 192.168.85.0/24 created
	I1228 07:31:40.388669    3796 kic.go:121] calculated static IP "192.168.85.2" for the "custom-flannel-410600" container
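The retry above is minikube's free-subnet search: starting from 192.168.49.0/24 it walks candidate /24s in steps of 9 (…49, 58, 67, 76, 85…), skips subnets already reserved, and treats the daemon's "Pool overlaps with other one on this address space" error as "subnet taken". A minimal bash sketch of that probe loop, assuming the network name and flags from the log (labels and error handling simplified):

	NAME=custom-flannel-410600
	for third in 49 58 67 76 85 94 103; do   # candidate third octets, step 9
	  if docker network create --driver=bridge \
	       --subnet=192.168.$third.0/24 --gateway=192.168.$third.1 "$NAME" 2>/dev/null; then
	    echo "created $NAME on 192.168.$third.0/24"   # success, stop probing
	    break
	  fi
	  echo "192.168.$third.0/24 taken, retrying"      # overlap or reserved, try next
	done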
	I1228 07:31:40.399613    3796 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1228 07:31:40.453209    3796 cli_runner.go:164] Run: docker volume create custom-flannel-410600 --label name.minikube.sigs.k8s.io=custom-flannel-410600 --label created_by.minikube.sigs.k8s.io=true
	I1228 07:31:40.511444    3796 oci.go:103] Successfully created a docker volume custom-flannel-410600
	I1228 07:31:40.514888    3796 cli_runner.go:164] Run: docker run --rm --name custom-flannel-410600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-410600 --entrypoint /usr/bin/test -v custom-flannel-410600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib
	I1228 07:31:41.994326    3796 cli_runner.go:217] Completed: docker run --rm --name custom-flannel-410600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-410600 --entrypoint /usr/bin/test -v custom-flannel-410600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib: (1.4794141s)
	I1228 07:31:41.994326    3796 oci.go:107] Successfully prepared a docker volume custom-flannel-410600
	I1228 07:31:41.994326    3796 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1228 07:31:41.994326    3796 kic.go:194] Starting extracting preloaded images to volume ...
	I1228 07:31:41.998487    3796 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-410600:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir
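The two docker run invocations above implement the kic preload: a throwaway sidecar (--entrypoint /usr/bin/test … -d /var/lib) first confirms the named volume mounts at /var, then a second container runs tar as its entrypoint to unpack the lz4-compressed image tarball into that volume. Stripped to its shape (with $PRELOAD_TARBALL and $KICBASE_IMAGE standing in for the host tarball path and digest-pinned kicbase image shown in the log), the extraction step is:

	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD_TARBALL":/preloaded.tar:ro \
	  -v custom-flannel-410600:/extractDir \
	  "$KICBASE_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir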
	I1228 07:31:39.323287   10604 logs.go:282] 1 containers: [98141142a325]
	I1228 07:31:39.328951   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:31:39.393946   10604 logs.go:282] 1 containers: [86effdc549fd]
	I1228 07:31:39.399956   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:31:39.430945   10604 logs.go:282] 1 containers: [51d165bee2b3]
	I1228 07:31:39.433943   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:31:39.464944   10604 logs.go:282] 2 containers: [7e91a1649939 3e6ad5f26302]
	I1228 07:31:39.467943   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:31:39.497950   10604 logs.go:282] 1 containers: [54081c44d8cf]
	I1228 07:31:39.501944   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:31:39.535858   10604 logs.go:282] 1 containers: [0c08327d6047]
	I1228 07:31:39.540245   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:31:39.587923   10604 logs.go:282] 0 containers: []
	W1228 07:31:39.587923   10604 logs.go:284] No container was found matching "kindnet"
	I1228 07:31:39.591924   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:31:39.633942   10604 logs.go:282] 1 containers: [8391aeb4a821]
	I1228 07:31:39.634945   10604 logs.go:123] Gathering logs for kubelet ...
	I1228 07:31:39.634945   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:31:39.758580   10604 logs.go:123] Gathering logs for dmesg ...
	I1228 07:31:39.758656   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:31:39.807462   10604 logs.go:123] Gathering logs for kube-apiserver [98141142a325] ...
	I1228 07:31:39.807462   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98141142a325"
	I1228 07:31:39.843464   10604 logs.go:123] Gathering logs for coredns [51d165bee2b3] ...
	I1228 07:31:39.843464   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d165bee2b3"
	I1228 07:31:39.890368   10604 logs.go:123] Gathering logs for kube-proxy [54081c44d8cf] ...
	I1228 07:31:39.890368   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54081c44d8cf"
	I1228 07:31:39.926217   10604 logs.go:123] Gathering logs for storage-provisioner [8391aeb4a821] ...
	I1228 07:31:39.926217   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8391aeb4a821"
	I1228 07:31:39.964093   10604 logs.go:123] Gathering logs for container status ...
	I1228 07:31:39.964135   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:31:40.040011   10604 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:31:40.040011   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:31:40.124014   10604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
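The "connection refused" on localhost:8443 is consistent with the healthz probes interleaved in this loop: the apiserver container exists but is not serving, so each kubectl describe nodes attempt fails and the collector falls back to per-component docker logs. The probe itself is just a GET against the forwarded healthz endpoint; an equivalent check from the host, assuming this process's forwarded port 55731 from the log:

	# -k: skip TLS verification, -s: silent, -f: fail on HTTP errors
	until curl -ksf https://127.0.0.1:55731/healthz; do
	  echo "apiserver not ready, retrying"; sleep 2
	done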
	I1228 07:31:40.124014   10604 logs.go:123] Gathering logs for etcd [86effdc549fd] ...
	I1228 07:31:40.124014   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86effdc549fd"
	I1228 07:31:40.165796   10604 logs.go:123] Gathering logs for kube-scheduler [7e91a1649939] ...
	I1228 07:31:40.165856   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e91a1649939"
	I1228 07:31:40.246369   10604 logs.go:123] Gathering logs for kube-scheduler [3e6ad5f26302] ...
	I1228 07:31:40.246369   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6ad5f26302"
	I1228 07:31:40.280371   10604 logs.go:123] Gathering logs for kube-controller-manager [0c08327d6047] ...
	I1228 07:31:40.280371   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c08327d6047"
	I1228 07:31:40.313722   10604 logs.go:123] Gathering logs for Docker ...
	I1228 07:31:40.313722   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:31:42.867527   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:31:42.870080   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": EOF
	I1228 07:31:42.874294   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:31:42.907111   10604 logs.go:282] 1 containers: [98141142a325]
	I1228 07:31:42.910838   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:31:42.940442   10604 logs.go:282] 1 containers: [86effdc549fd]
	I1228 07:31:42.944299   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:31:42.979748   10604 logs.go:282] 1 containers: [51d165bee2b3]
	I1228 07:31:42.982861   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:31:43.014765   10604 logs.go:282] 2 containers: [7e91a1649939 3e6ad5f26302]
	I1228 07:31:43.018068   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:31:43.051671   10604 logs.go:282] 1 containers: [54081c44d8cf]
	I1228 07:31:43.055500   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:31:43.085425   10604 logs.go:282] 1 containers: [0c08327d6047]
	I1228 07:31:43.090004   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:31:43.120005   10604 logs.go:282] 0 containers: []
	W1228 07:31:43.120005   10604 logs.go:284] No container was found matching "kindnet"
	I1228 07:31:43.123671   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:31:43.155540   10604 logs.go:282] 1 containers: [8391aeb4a821]
	I1228 07:31:43.155540   10604 logs.go:123] Gathering logs for kube-proxy [54081c44d8cf] ...
	I1228 07:31:43.155620   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54081c44d8cf"
	I1228 07:31:43.192902   10604 logs.go:123] Gathering logs for Docker ...
	I1228 07:31:43.192949   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:31:43.252825   10604 logs.go:123] Gathering logs for kubelet ...
	I1228 07:31:43.252825   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:31:43.364463   10604 logs.go:123] Gathering logs for dmesg ...
	I1228 07:31:43.364463   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:31:43.407112   10604 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:31:43.407112   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:31:43.493442   10604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:31:43.493442   10604 logs.go:123] Gathering logs for etcd [86effdc549fd] ...
	I1228 07:31:43.493442   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86effdc549fd"
	I1228 07:31:43.545780   10604 logs.go:123] Gathering logs for coredns [51d165bee2b3] ...
	I1228 07:31:43.545780   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d165bee2b3"
	I1228 07:31:43.579900   10604 logs.go:123] Gathering logs for kube-controller-manager [0c08327d6047] ...
	I1228 07:31:43.579963   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c08327d6047"
	I1228 07:31:43.613620   10604 logs.go:123] Gathering logs for storage-provisioner [8391aeb4a821] ...
	I1228 07:31:43.613620   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8391aeb4a821"
	I1228 07:31:43.648342   10604 logs.go:123] Gathering logs for container status ...
	I1228 07:31:43.648342   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:31:43.710041   10604 logs.go:123] Gathering logs for kube-apiserver [98141142a325] ...
	I1228 07:31:43.710041   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98141142a325"
	I1228 07:31:43.751109   10604 logs.go:123] Gathering logs for kube-scheduler [7e91a1649939] ...
	I1228 07:31:43.751109   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e91a1649939"
	I1228 07:31:43.827621   10604 logs.go:123] Gathering logs for kube-scheduler [3e6ad5f26302] ...
	I1228 07:31:43.827621   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6ad5f26302"
	I1228 07:31:44.919906    9412 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55937/healthz ...
	I1228 07:31:44.922369    9412 api_server.go:315] stopped: https://127.0.0.1:55937/healthz: Get "https://127.0.0.1:55937/healthz": EOF
	I1228 07:31:44.926031    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:31:44.958819    9412 logs.go:282] 2 containers: [7b42c98a4262 bfa8dc267780]
	I1228 07:31:44.962359    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:31:44.993387    9412 logs.go:282] 1 containers: [94cc14c728d5]
	I1228 07:31:44.996585    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:31:45.028717    9412 logs.go:282] 0 containers: []
	W1228 07:31:45.028717    9412 logs.go:284] No container was found matching "coredns"
	I1228 07:31:45.032329    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:31:45.062631    9412 logs.go:282] 2 containers: [3ddeeca293a6 47ffeb4b853d]
	I1228 07:31:45.066408    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:31:45.093999    9412 logs.go:282] 0 containers: []
	W1228 07:31:45.093999    9412 logs.go:284] No container was found matching "kube-proxy"
	I1228 07:31:45.097594    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:31:45.130660    9412 logs.go:282] 3 containers: [8169474521a1 3f64f9a54844 67014a6dfb79]
	I1228 07:31:45.134380    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:31:45.164600    9412 logs.go:282] 0 containers: []
	W1228 07:31:45.164600    9412 logs.go:284] No container was found matching "kindnet"
	I1228 07:31:45.168489    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:31:45.197438    9412 logs.go:282] 0 containers: []
	W1228 07:31:45.197438    9412 logs.go:284] No container was found matching "storage-provisioner"
	I1228 07:31:45.197438    9412 logs.go:123] Gathering logs for Docker ...
	I1228 07:31:45.197438    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:31:45.230606    9412 logs.go:123] Gathering logs for container status ...
	I1228 07:31:45.230606    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:31:45.278538    9412 logs.go:123] Gathering logs for kubelet ...
	I1228 07:31:45.278599    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:31:45.354568    9412 logs.go:123] Gathering logs for kube-apiserver [7b42c98a4262] ...
	I1228 07:31:45.354568    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b42c98a4262"
	I1228 07:31:45.396533    9412 logs.go:123] Gathering logs for kube-apiserver [bfa8dc267780] ...
	I1228 07:31:45.396533    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa8dc267780"
	I1228 07:31:45.444903    9412 logs.go:123] Gathering logs for etcd [94cc14c728d5] ...
	I1228 07:31:45.444903    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94cc14c728d5"
	I1228 07:31:45.490921    9412 logs.go:123] Gathering logs for kube-scheduler [3ddeeca293a6] ...
	I1228 07:31:45.490921    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ddeeca293a6"
	I1228 07:31:45.526816    9412 logs.go:123] Gathering logs for kube-controller-manager [8169474521a1] ...
	I1228 07:31:45.526816    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8169474521a1"
	I1228 07:31:45.559419    9412 logs.go:123] Gathering logs for kube-controller-manager [3f64f9a54844] ...
	I1228 07:31:45.559419    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64f9a54844"
	I1228 07:31:45.592816    9412 logs.go:123] Gathering logs for kube-controller-manager [67014a6dfb79] ...
	I1228 07:31:45.592871    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67014a6dfb79"
	I1228 07:31:45.641086    9412 logs.go:123] Gathering logs for dmesg ...
	I1228 07:31:45.641086    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:31:45.678088    9412 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:31:45.678088    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:31:45.754548    9412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:31:45.755075    9412 logs.go:123] Gathering logs for kube-scheduler [47ffeb4b853d] ...
	I1228 07:31:45.755075    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47ffeb4b853d"
	I1228 07:31:48.301862    9412 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55937/healthz ...
	I1228 07:31:48.304761    9412 api_server.go:315] stopped: https://127.0.0.1:55937/healthz: Get "https://127.0.0.1:55937/healthz": EOF
	I1228 07:31:48.308088    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:31:48.339425    9412 logs.go:282] 2 containers: [7b42c98a4262 bfa8dc267780]
	I1228 07:31:48.344096    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:31:48.373172    9412 logs.go:282] 1 containers: [94cc14c728d5]
	I1228 07:31:48.376506    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:31:48.407106    9412 logs.go:282] 0 containers: []
	W1228 07:31:48.407106    9412 logs.go:284] No container was found matching "coredns"
	I1228 07:31:48.410859    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:31:48.443985    9412 logs.go:282] 2 containers: [3ddeeca293a6 47ffeb4b853d]
	I1228 07:31:48.447972    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:31:48.477357    9412 logs.go:282] 0 containers: []
	W1228 07:31:48.477357    9412 logs.go:284] No container was found matching "kube-proxy"
	I1228 07:31:48.481998    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:31:48.514050    9412 logs.go:282] 3 containers: [8169474521a1 3f64f9a54844 67014a6dfb79]
	I1228 07:31:48.518050    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:31:48.545129    9412 logs.go:282] 0 containers: []
	W1228 07:31:48.545129    9412 logs.go:284] No container was found matching "kindnet"
	I1228 07:31:48.548347    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:31:48.581001    9412 logs.go:282] 0 containers: []
	W1228 07:31:48.581001    9412 logs.go:284] No container was found matching "storage-provisioner"
	I1228 07:31:48.581001    9412 logs.go:123] Gathering logs for kubelet ...
	I1228 07:31:48.581001    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:31:48.658096    9412 logs.go:123] Gathering logs for kube-apiserver [7b42c98a4262] ...
	I1228 07:31:48.658096    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b42c98a4262"
	I1228 07:31:48.698725    9412 logs.go:123] Gathering logs for kube-scheduler [3ddeeca293a6] ...
	I1228 07:31:48.698725    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ddeeca293a6"
	I1228 07:31:48.731287    9412 logs.go:123] Gathering logs for kube-scheduler [47ffeb4b853d] ...
	I1228 07:31:48.731287    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47ffeb4b853d"
	I1228 07:31:48.772005    9412 logs.go:123] Gathering logs for kube-controller-manager [3f64f9a54844] ...
	I1228 07:31:48.772005    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64f9a54844"
	I1228 07:31:48.810536    9412 logs.go:123] Gathering logs for kube-controller-manager [67014a6dfb79] ...
	I1228 07:31:48.810588    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67014a6dfb79"
	I1228 07:31:48.851610    9412 logs.go:123] Gathering logs for container status ...
	I1228 07:31:48.851610    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:31:48.904142    9412 logs.go:123] Gathering logs for dmesg ...
	I1228 07:31:48.904209    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:31:48.942814    9412 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:31:48.942814    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:31:49.026822    9412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:31:49.026822    9412 logs.go:123] Gathering logs for kube-apiserver [bfa8dc267780] ...
	I1228 07:31:49.026822    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa8dc267780"
	I1228 07:31:49.075440    9412 logs.go:123] Gathering logs for etcd [94cc14c728d5] ...
	I1228 07:31:49.075440    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94cc14c728d5"
	I1228 07:31:46.363123   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:31:46.365131   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": EOF
	I1228 07:31:46.368134   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:31:46.398775   10604 logs.go:282] 1 containers: [98141142a325]
	I1228 07:31:46.402606   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:31:46.432980   10604 logs.go:282] 1 containers: [86effdc549fd]
	I1228 07:31:46.436504   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:31:46.466924   10604 logs.go:282] 1 containers: [51d165bee2b3]
	I1228 07:31:46.470874   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:31:46.504223   10604 logs.go:282] 2 containers: [7e91a1649939 3e6ad5f26302]
	I1228 07:31:46.507560   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:31:46.540781   10604 logs.go:282] 1 containers: [54081c44d8cf]
	I1228 07:31:46.544661   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:31:46.578891   10604 logs.go:282] 1 containers: [0c08327d6047]
	I1228 07:31:46.582625   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:31:46.612110   10604 logs.go:282] 0 containers: []
	W1228 07:31:46.612110   10604 logs.go:284] No container was found matching "kindnet"
	I1228 07:31:46.615607   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:31:46.646156   10604 logs.go:282] 1 containers: [8391aeb4a821]
	I1228 07:31:46.646244   10604 logs.go:123] Gathering logs for etcd [86effdc549fd] ...
	I1228 07:31:46.646244   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86effdc549fd"
	I1228 07:31:46.692635   10604 logs.go:123] Gathering logs for kube-scheduler [7e91a1649939] ...
	I1228 07:31:46.692635   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e91a1649939"
	I1228 07:31:46.767714   10604 logs.go:123] Gathering logs for kube-scheduler [3e6ad5f26302] ...
	I1228 07:31:46.767714   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6ad5f26302"
	I1228 07:31:46.804903   10604 logs.go:123] Gathering logs for kube-proxy [54081c44d8cf] ...
	I1228 07:31:46.804903   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54081c44d8cf"
	I1228 07:31:46.839591   10604 logs.go:123] Gathering logs for Docker ...
	I1228 07:31:46.839591   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:31:46.893812   10604 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:31:46.893812   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:31:46.981567   10604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:31:46.981607   10604 logs.go:123] Gathering logs for kube-apiserver [98141142a325] ...
	I1228 07:31:46.981658   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98141142a325"
	I1228 07:31:47.024063   10604 logs.go:123] Gathering logs for coredns [51d165bee2b3] ...
	I1228 07:31:47.024135   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d165bee2b3"
	I1228 07:31:47.059653   10604 logs.go:123] Gathering logs for kube-controller-manager [0c08327d6047] ...
	I1228 07:31:47.059698   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c08327d6047"
	I1228 07:31:47.094153   10604 logs.go:123] Gathering logs for storage-provisioner [8391aeb4a821] ...
	I1228 07:31:47.094214   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8391aeb4a821"
	I1228 07:31:47.128896   10604 logs.go:123] Gathering logs for container status ...
	I1228 07:31:47.128896   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:31:47.189418   10604 logs.go:123] Gathering logs for kubelet ...
	I1228 07:31:47.189511   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:31:47.298647   10604 logs.go:123] Gathering logs for dmesg ...
	I1228 07:31:47.298647   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:31:49.120156    9412 logs.go:123] Gathering logs for kube-controller-manager [8169474521a1] ...
	I1228 07:31:49.120156    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8169474521a1"
	I1228 07:31:49.439479    9412 logs.go:123] Gathering logs for Docker ...
	I1228 07:31:49.439479    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:31:51.977730    9412 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55937/healthz ...
	I1228 07:31:51.981499    9412 api_server.go:315] stopped: https://127.0.0.1:55937/healthz: Get "https://127.0.0.1:55937/healthz": EOF
	I1228 07:31:51.985133    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:31:52.022645    9412 logs.go:282] 2 containers: [7b42c98a4262 bfa8dc267780]
	I1228 07:31:52.027814    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:31:52.058477    9412 logs.go:282] 1 containers: [94cc14c728d5]
	I1228 07:31:52.065833    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:31:52.104180    9412 logs.go:282] 0 containers: []
	W1228 07:31:52.104180    9412 logs.go:284] No container was found matching "coredns"
	I1228 07:31:52.107811    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:31:52.139259    9412 logs.go:282] 2 containers: [3ddeeca293a6 47ffeb4b853d]
	I1228 07:31:52.143294    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:31:52.173227    9412 logs.go:282] 0 containers: []
	W1228 07:31:52.173227    9412 logs.go:284] No container was found matching "kube-proxy"
	I1228 07:31:52.176887    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:31:52.210764    9412 logs.go:282] 3 containers: [8169474521a1 3f64f9a54844 67014a6dfb79]
	I1228 07:31:52.213850    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:31:52.243103    9412 logs.go:282] 0 containers: []
	W1228 07:31:52.243174    9412 logs.go:284] No container was found matching "kindnet"
	I1228 07:31:52.247180    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:31:52.281191    9412 logs.go:282] 0 containers: []
	W1228 07:31:52.281191    9412 logs.go:284] No container was found matching "storage-provisioner"
	I1228 07:31:52.281191    9412 logs.go:123] Gathering logs for kubelet ...
	I1228 07:31:52.281191    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:31:52.363945    9412 logs.go:123] Gathering logs for dmesg ...
	I1228 07:31:52.363945    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:31:52.406370    9412 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:31:52.406370    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:31:52.495520    9412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:31:52.495520    9412 logs.go:123] Gathering logs for kube-apiserver [7b42c98a4262] ...
	I1228 07:31:52.495520    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b42c98a4262"
	I1228 07:31:52.540988    9412 logs.go:123] Gathering logs for kube-apiserver [bfa8dc267780] ...
	I1228 07:31:52.540988    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa8dc267780"
	I1228 07:31:52.629013    9412 logs.go:123] Gathering logs for etcd [94cc14c728d5] ...
	I1228 07:31:52.629013    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94cc14c728d5"
	I1228 07:31:52.670018    9412 logs.go:123] Gathering logs for kube-scheduler [3ddeeca293a6] ...
	I1228 07:31:52.670018    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ddeeca293a6"
	I1228 07:31:52.733287    9412 logs.go:123] Gathering logs for kube-scheduler [47ffeb4b853d] ...
	I1228 07:31:52.733287    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47ffeb4b853d"
	I1228 07:31:52.840051    9412 logs.go:123] Gathering logs for kube-controller-manager [8169474521a1] ...
	I1228 07:31:52.840090    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8169474521a1"
	I1228 07:31:52.891885    9412 logs.go:123] Gathering logs for kube-controller-manager [3f64f9a54844] ...
	I1228 07:31:52.891885    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64f9a54844"
	W1228 07:31:52.921892    9412 logs.go:130] failed kube-controller-manager [3f64f9a54844]: command: /bin/bash -c "docker logs --tail 400 3f64f9a54844" /bin/bash -c "docker logs --tail 400 3f64f9a54844": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 3f64f9a54844
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 3f64f9a54844
	
	** /stderr **
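This failure looks like a snapshot race rather than a collection bug: the container IDs were listed with docker ps -a at the top of the cycle, and 3f64f9a54844 was evidently removed before its docker logs call ran. A tolerant collector could re-check existence first, e.g.:

	# only fetch logs if the container still exists
	docker inspect 3f64f9a54844 >/dev/null 2>&1 && docker logs --tail 400 3f64f9a54844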
	I1228 07:31:52.921892    9412 logs.go:123] Gathering logs for kube-controller-manager [67014a6dfb79] ...
	I1228 07:31:52.921892    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67014a6dfb79"
	I1228 07:31:52.968888    9412 logs.go:123] Gathering logs for Docker ...
	I1228 07:31:52.968888    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:31:53.023757    9412 logs.go:123] Gathering logs for container status ...
	I1228 07:31:53.023757    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:31:52.508353    3796 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-410600:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir: (10.509701s)
	I1228 07:31:52.508353    3796 kic.go:203] duration metric: took 10.513862s to extract preloaded images to volume ...
	I1228 07:31:52.513996    3796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:31:52.764299    3796 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-28 07:31:52.741422216 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1228 07:31:52.769502    3796 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1228 07:31:53.021767    3796 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-410600 --name custom-flannel-410600 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-410600 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-410600 --network custom-flannel-410600 --ip 192.168.85.2 --volume custom-flannel-410600:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1
	I1228 07:31:53.697402    3796 cli_runner.go:164] Run: docker container inspect custom-flannel-410600 --format={{.State.Running}}
	I1228 07:31:53.760061    3796 cli_runner.go:164] Run: docker container inspect custom-flannel-410600 --format={{.State.Status}}
	I1228 07:31:53.816057    3796 cli_runner.go:164] Run: docker exec custom-flannel-410600 stat /var/lib/dpkg/alternatives/iptables
	I1228 07:31:53.927467    3796 oci.go:144] the created container "custom-flannel-410600" has a running status.
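The node itself is this container: --privileged with tmpfs on /tmp and /run, a fixed --ip on the network created above, and 127.0.0.1-only ephemeral publishes for 22, 8443, 2376, 5000 and 32443. The SSH steps that follow recover the host port for 22/tcp with the inspect template seen later in the log; the same lookup can be done by hand:

	docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  custom-flannel-410600
	# or, equivalently:
	docker port custom-flannel-410600 22/tcp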
	I1228 07:31:53.927467    3796 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-410600\id_rsa...
	I1228 07:31:54.034036    3796 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-410600\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1228 07:31:49.840122   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:31:49.843332   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": EOF
	I1228 07:31:49.846768   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:31:49.880962   10604 logs.go:282] 1 containers: [98141142a325]
	I1228 07:31:49.884842   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:31:49.918601   10604 logs.go:282] 1 containers: [86effdc549fd]
	I1228 07:31:49.921590   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:31:49.953200   10604 logs.go:282] 1 containers: [51d165bee2b3]
	I1228 07:31:49.957204   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:31:49.986237   10604 logs.go:282] 2 containers: [7e91a1649939 3e6ad5f26302]
	I1228 07:31:49.989914   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:31:50.023683   10604 logs.go:282] 1 containers: [54081c44d8cf]
	I1228 07:31:50.027676   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:31:50.058183   10604 logs.go:282] 1 containers: [0c08327d6047]
	I1228 07:31:50.063837   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:31:50.093259   10604 logs.go:282] 0 containers: []
	W1228 07:31:50.093259   10604 logs.go:284] No container was found matching "kindnet"
	I1228 07:31:50.096454   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:31:50.131880   10604 logs.go:282] 1 containers: [8391aeb4a821]
	I1228 07:31:50.131931   10604 logs.go:123] Gathering logs for storage-provisioner [8391aeb4a821] ...
	I1228 07:31:50.131931   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8391aeb4a821"
	I1228 07:31:50.166791   10604 logs.go:123] Gathering logs for container status ...
	I1228 07:31:50.166791   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:31:50.231434   10604 logs.go:123] Gathering logs for dmesg ...
	I1228 07:31:50.231485   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:31:50.269264   10604 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:31:50.269264   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:31:50.347704   10604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:31:50.347704   10604 logs.go:123] Gathering logs for etcd [86effdc549fd] ...
	I1228 07:31:50.347704   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86effdc549fd"
	I1228 07:31:50.387506   10604 logs.go:123] Gathering logs for kube-controller-manager [0c08327d6047] ...
	I1228 07:31:50.387506   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c08327d6047"
	I1228 07:31:50.420857   10604 logs.go:123] Gathering logs for Docker ...
	I1228 07:31:50.420857   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:31:50.476093   10604 logs.go:123] Gathering logs for kubelet ...
	I1228 07:31:50.476093   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:31:50.586273   10604 logs.go:123] Gathering logs for kube-apiserver [98141142a325] ...
	I1228 07:31:50.586273   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98141142a325"
	I1228 07:31:50.625251   10604 logs.go:123] Gathering logs for coredns [51d165bee2b3] ...
	I1228 07:31:50.625251   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d165bee2b3"
	I1228 07:31:50.659361   10604 logs.go:123] Gathering logs for kube-scheduler [7e91a1649939] ...
	I1228 07:31:50.659361   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e91a1649939"
	I1228 07:31:50.733622   10604 logs.go:123] Gathering logs for kube-scheduler [3e6ad5f26302] ...
	I1228 07:31:50.733622   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6ad5f26302"
	I1228 07:31:50.770890   10604 logs.go:123] Gathering logs for kube-proxy [54081c44d8cf] ...
	I1228 07:31:50.770890   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54081c44d8cf"
	I1228 07:31:53.304720   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:31:53.307301   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": EOF
	I1228 07:31:53.311104   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:31:53.342876   10604 logs.go:282] 1 containers: [98141142a325]
	I1228 07:31:53.346200   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:31:53.385117   10604 logs.go:282] 1 containers: [86effdc549fd]
	I1228 07:31:53.388643   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:31:53.428795   10604 logs.go:282] 1 containers: [51d165bee2b3]
	I1228 07:31:53.431781   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:31:53.463842   10604 logs.go:282] 2 containers: [7e91a1649939 3e6ad5f26302]
	I1228 07:31:53.467586   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:31:53.498990   10604 logs.go:282] 1 containers: [54081c44d8cf]
	I1228 07:31:53.503391   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:31:53.536187   10604 logs.go:282] 1 containers: [0c08327d6047]
	I1228 07:31:53.541024   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:31:53.569421   10604 logs.go:282] 0 containers: []
	W1228 07:31:53.569421   10604 logs.go:284] No container was found matching "kindnet"
	I1228 07:31:53.573134   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:31:53.603975   10604 logs.go:282] 1 containers: [8391aeb4a821]
	I1228 07:31:53.604038   10604 logs.go:123] Gathering logs for dmesg ...
	I1228 07:31:53.604062   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:31:53.663179   10604 logs.go:123] Gathering logs for kube-apiserver [98141142a325] ...
	I1228 07:31:53.663179   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98141142a325"
	I1228 07:31:53.711077   10604 logs.go:123] Gathering logs for kube-scheduler [3e6ad5f26302] ...
	I1228 07:31:53.712060   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6ad5f26302"
	I1228 07:31:53.748058   10604 logs.go:123] Gathering logs for kube-proxy [54081c44d8cf] ...
	I1228 07:31:53.748058   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54081c44d8cf"
	I1228 07:31:53.788060   10604 logs.go:123] Gathering logs for kube-controller-manager [0c08327d6047] ...
	I1228 07:31:53.788060   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c08327d6047"
	I1228 07:31:53.825063   10604 logs.go:123] Gathering logs for storage-provisioner [8391aeb4a821] ...
	I1228 07:31:53.825063   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8391aeb4a821"
	I1228 07:31:53.858068   10604 logs.go:123] Gathering logs for container status ...
	I1228 07:31:53.858068   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:31:53.926469   10604 logs.go:123] Gathering logs for kubelet ...
	I1228 07:31:53.926469   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:31:54.043729   10604 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:31:54.043729   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:31:54.172964   10604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:31:54.172964   10604 logs.go:123] Gathering logs for etcd [86effdc549fd] ...
	I1228 07:31:54.172964   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86effdc549fd"
	I1228 07:31:54.224963   10604 logs.go:123] Gathering logs for coredns [51d165bee2b3] ...
	I1228 07:31:54.224963   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d165bee2b3"
	I1228 07:31:54.264958   10604 logs.go:123] Gathering logs for kube-scheduler [7e91a1649939] ...
	I1228 07:31:54.264958   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e91a1649939"
	I1228 07:31:55.613113    9412 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55937/healthz ...
	I1228 07:31:54.123951    3796 cli_runner.go:164] Run: docker container inspect custom-flannel-410600 --format={{.State.Status}}
	I1228 07:31:54.182962    3796 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1228 07:31:54.182962    3796 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-410600 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1228 07:31:54.324624    3796 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-410600\id_rsa...
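The key provisioning here is plain OpenSSH material: a fresh RSA keypair under the profile's machines directory, the public half written to /home/docker/.ssh/authorized_keys inside the container, then chowned to the docker user (the kic_runner lines above). A hand-rolled equivalent, assuming the container and paths from the log (docker cp stands in for minikube's kic_runner file copy):

	ssh-keygen -t rsa -N "" -f ./id_rsa
	docker cp ./id_rsa.pub custom-flannel-410600:/home/docker/.ssh/authorized_keys
	docker exec --privileged custom-flannel-410600 \
	  chown docker:docker /home/docker/.ssh/authorized_keys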
	I1228 07:31:56.392955    3796 cli_runner.go:164] Run: docker container inspect custom-flannel-410600 --format={{.State.Status}}
	I1228 07:31:56.449152    3796 machine.go:94] provisionDockerMachine start ...
	I1228 07:31:56.454594    3796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-410600
	I1228 07:31:56.512543    3796 main.go:144] libmachine: Using SSH client type: native
	I1228 07:31:56.526098    3796 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7c03fe200] 0x7ff7c0400d60 <nil>  [] 0s} 127.0.0.1 56201 <nil> <nil>}
	I1228 07:31:56.526174    3796 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 07:31:56.700262    3796 main.go:144] libmachine: SSH cmd err, output: <nil>: custom-flannel-410600
	
	I1228 07:31:56.700325    3796 ubuntu.go:182] provisioning hostname "custom-flannel-410600"
	I1228 07:31:56.703159    3796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-410600
	I1228 07:31:56.759634    3796 main.go:144] libmachine: Using SSH client type: native
	I1228 07:31:56.759786    3796 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7c03fe200] 0x7ff7c0400d60 <nil>  [] 0s} 127.0.0.1 56201 <nil> <nil>}
	I1228 07:31:56.759786    3796 main.go:144] libmachine: About to run SSH command:
	sudo hostname custom-flannel-410600 && echo "custom-flannel-410600" | sudo tee /etc/hostname
	I1228 07:31:56.932414    3796 main.go:144] libmachine: SSH cmd err, output: <nil>: custom-flannel-410600
	
	I1228 07:31:56.936953    3796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-410600
	I1228 07:31:56.995818    3796 main.go:144] libmachine: Using SSH client type: native
	I1228 07:31:56.995818    3796 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7c03fe200] 0x7ff7c0400d60 <nil>  [] 0s} 127.0.0.1 56201 <nil> <nil>}
	I1228 07:31:56.995818    3796 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-410600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-410600/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-410600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 07:31:57.176330    3796 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 07:31:57.176330    3796 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1228 07:31:57.176330    3796 ubuntu.go:190] setting up certificates
	I1228 07:31:57.176330    3796 provision.go:84] configureAuth start
	I1228 07:31:57.179735    3796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-410600
	I1228 07:31:57.237053    3796 provision.go:143] copyHostCerts
	I1228 07:31:57.237053    3796 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1228 07:31:57.237053    3796 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1228 07:31:57.237053    3796 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1228 07:31:57.238053    3796 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1228 07:31:57.238053    3796 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1228 07:31:57.238053    3796 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1228 07:31:57.239045    3796 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1228 07:31:57.239045    3796 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1228 07:31:57.239045    3796 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1228 07:31:57.240052    3796 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.custom-flannel-410600 san=[127.0.0.1 192.168.85.2 custom-flannel-410600 localhost minikube]
	I1228 07:31:57.280049    3796 provision.go:177] copyRemoteCerts
	I1228 07:31:57.284045    3796 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 07:31:57.287044    3796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-410600
	I1228 07:31:57.335050    3796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56201 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-410600\id_rsa Username:docker}
	I1228 07:31:57.454531    3796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1228 07:31:57.481536    3796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
	I1228 07:31:57.508517    3796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 07:31:57.537494    3796 provision.go:87] duration metric: took 361.1584ms to configureAuth
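
configureAuth above copies the host CA material and signs a server certificate whose SANs cover the node's IPs and hostnames (the san=[...] list in the log). A self-contained Go sketch of that signing step using crypto/x509, with names and SAN values taken from the log; error handling is elided for brevity, and this is illustrative rather than minikube's real implementation:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Create a throwaway CA, then sign a server cert with IP and DNS SANs.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "custom-flannel-410600"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log: node IPs plus host names.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"custom-flannel-410600", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
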
	I1228 07:31:57.537494    3796 ubuntu.go:206] setting minikube options for container-runtime
	I1228 07:31:57.537494    3796 config.go:182] Loaded profile config "custom-flannel-410600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 07:31:57.541479    3796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-410600
	I1228 07:31:57.596023    3796 main.go:144] libmachine: Using SSH client type: native
	I1228 07:31:57.596023    3796 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7c03fe200] 0x7ff7c0400d60 <nil>  [] 0s} 127.0.0.1 56201 <nil> <nil>}
	I1228 07:31:57.596546    3796 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1228 07:31:57.757049    3796 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1228 07:31:57.757049    3796 ubuntu.go:71] root file system type: overlay
	I1228 07:31:57.757049    3796 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1228 07:31:57.761730    3796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-410600
	I1228 07:31:57.815139    3796 main.go:144] libmachine: Using SSH client type: native
	I1228 07:31:57.815139    3796 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7c03fe200] 0x7ff7c0400d60 <nil>  [] 0s} 127.0.0.1 56201 <nil> <nil>}
	I1228 07:31:57.815139    3796 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1228 07:31:57.983209    3796 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1228 07:31:57.989322    3796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-410600
	I1228 07:31:58.046332    3796 main.go:144] libmachine: Using SSH client type: native
	I1228 07:31:58.047024    3796 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7c03fe200] 0x7ff7c0400d60 <nil>  [] 0s} 127.0.0.1 56201 <nil> <nil>}
	I1228 07:31:58.047024    3796 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
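
The compound command above is a write-if-changed idiom: only when the rendered unit differs from the installed one (diff exits non-zero) is the new file moved into place and docker reloaded, re-enabled, and restarted. The same idea expressed in Go (updateIfChanged is a hypothetical helper, not minikube's code):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// updateIfChanged writes data to path only when the content differs,
// reporting whether a change was made so the caller knows a service
// restart is needed. Mirrors the "diff || { mv; restart; }" idiom above.
func updateIfChanged(path string, data []byte) (bool, error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, data) {
		return false, nil // already up to date: skip the restart
	}
	if err := os.WriteFile(path, data, 0o644); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	changed, err := updateIfChanged("docker.service", []byte("[Unit]\nDescription=demo\n"))
	if err != nil {
		panic(err)
	}
	fmt.Println("restart needed:", changed)
}
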
	I1228 07:31:54.353603   10604 logs.go:123] Gathering logs for Docker ...
	I1228 07:31:54.353603   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:31:56.913214   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:31:56.915814   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": EOF
	I1228 07:31:56.918804   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:31:56.955972   10604 logs.go:282] 1 containers: [98141142a325]
	I1228 07:31:56.960992   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:31:56.999123   10604 logs.go:282] 1 containers: [86effdc549fd]
	I1228 07:31:57.002255   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:31:57.036753   10604 logs.go:282] 1 containers: [51d165bee2b3]
	I1228 07:31:57.042064   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:31:57.083114   10604 logs.go:282] 2 containers: [7e91a1649939 3e6ad5f26302]
	I1228 07:31:57.087946   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:31:57.123952   10604 logs.go:282] 1 containers: [54081c44d8cf]
	I1228 07:31:57.126949   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:31:57.156529   10604 logs.go:282] 1 containers: [0c08327d6047]
	I1228 07:31:57.161911   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:31:57.196764   10604 logs.go:282] 0 containers: []
	W1228 07:31:57.196764   10604 logs.go:284] No container was found matching "kindnet"
	I1228 07:31:57.201068   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:31:57.238053   10604 logs.go:282] 1 containers: [8391aeb4a821]
	I1228 07:31:57.238053   10604 logs.go:123] Gathering logs for kubelet ...
	I1228 07:31:57.238053   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:31:57.341504   10604 logs.go:123] Gathering logs for dmesg ...
	I1228 07:31:57.341504   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:31:57.382220   10604 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:31:57.382220   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:31:57.470518   10604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:31:57.471533   10604 logs.go:123] Gathering logs for kube-apiserver [98141142a325] ...
	I1228 07:31:57.471533   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98141142a325"
	I1228 07:31:57.510520   10604 logs.go:123] Gathering logs for kube-scheduler [7e91a1649939] ...
	I1228 07:31:57.510520   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e91a1649939"
	I1228 07:31:57.586482   10604 logs.go:123] Gathering logs for kube-scheduler [3e6ad5f26302] ...
	I1228 07:31:57.586482   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6ad5f26302"
	I1228 07:31:57.624881   10604 logs.go:123] Gathering logs for kube-controller-manager [0c08327d6047] ...
	I1228 07:31:57.624937   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c08327d6047"
	I1228 07:31:57.663854   10604 logs.go:123] Gathering logs for storage-provisioner [8391aeb4a821] ...
	I1228 07:31:57.663854   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8391aeb4a821"
	I1228 07:31:57.699611   10604 logs.go:123] Gathering logs for etcd [86effdc549fd] ...
	I1228 07:31:57.699674   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86effdc549fd"
	I1228 07:31:57.739929   10604 logs.go:123] Gathering logs for coredns [51d165bee2b3] ...
	I1228 07:31:57.739983   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d165bee2b3"
	I1228 07:31:57.776148   10604 logs.go:123] Gathering logs for kube-proxy [54081c44d8cf] ...
	I1228 07:31:57.776148   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54081c44d8cf"
	I1228 07:31:57.808156   10604 logs.go:123] Gathering logs for Docker ...
	I1228 07:31:57.808156   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:31:57.860139   10604 logs.go:123] Gathering logs for container status ...
	I1228 07:31:57.860139   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:31:59.524471    3796 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-28 07:31:57.973240769 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1228 07:31:59.524471    3796 machine.go:97] duration metric: took 3.0752215s to provisionDockerMachine
	I1228 07:31:59.524564    3796 client.go:176] duration metric: took 19.6200405s to LocalClient.Create
	I1228 07:31:59.524564    3796 start.go:167] duration metric: took 19.6200405s to libmachine.API.Create "custom-flannel-410600"
	I1228 07:31:59.524564    3796 start.go:293] postStartSetup for "custom-flannel-410600" (driver="docker")
	I1228 07:31:59.524564    3796 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 07:31:59.528813    3796 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 07:31:59.531488    3796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-410600
	I1228 07:31:59.586731    3796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56201 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-410600\id_rsa Username:docker}
	I1228 07:31:59.719833    3796 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 07:31:59.727412    3796 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 07:31:59.727412    3796 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 07:31:59.727412    3796 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1228 07:31:59.727412    3796 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1228 07:31:59.727412    3796 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\135562.pem -> 135562.pem in /etc/ssl/certs
	I1228 07:31:59.734783    3796 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 07:31:59.747744    3796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\135562.pem --> /etc/ssl/certs/135562.pem (1708 bytes)
	I1228 07:31:59.774745    3796 start.go:296] duration metric: took 250.1771ms for postStartSetup
	I1228 07:31:59.779745    3796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-410600
	I1228 07:31:59.828749    3796 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\config.json ...
	I1228 07:31:59.833745    3796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:31:59.836740    3796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-410600
	I1228 07:31:59.891281    3796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56201 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-410600\id_rsa Username:docker}
	I1228 07:32:00.023942    3796 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 07:32:00.032047    3796 start.go:128] duration metric: took 20.1304977s to createHost
	I1228 07:32:00.032047    3796 start.go:83] releasing machines lock for "custom-flannel-410600", held for 20.1304977s
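
The two df probes just above check how full /var is and how many gigabytes remain before provisioning continues. A Go sketch of the same probe, shelling out to df/awk exactly as the log does (assumes a GNU coreutils df on the target, as inside the container here):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// freeGB returns the available space on the filesystem holding path,
// formatted in whole gigabytes, the same way the log's probe does.
func freeGB(path string) (string, error) {
	out, err := exec.Command("sh", "-c",
		fmt.Sprintf("df -BG %s | awk 'NR==2{print $4}'", path)).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	free, err := freeGB("/var")
	if err != nil {
		panic(err)
	}
	fmt.Println("free on /var:", free)
}
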
	I1228 07:32:00.036340    3796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-410600
	I1228 07:32:00.093223    3796 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1228 07:32:00.097070    3796 ssh_runner.go:195] Run: cat /version.json
	I1228 07:32:00.098930    3796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-410600
	I1228 07:32:00.100509    3796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-410600
	I1228 07:32:00.157433    3796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56201 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-410600\id_rsa Username:docker}
	I1228 07:32:00.173289    3796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56201 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-410600\id_rsa Username:docker}
	W1228 07:32:00.287403    3796 start.go:879] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1228 07:32:00.291581    3796 ssh_runner.go:195] Run: systemctl --version
	I1228 07:32:00.307842    3796 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 07:32:00.316716    3796 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 07:32:00.321240    3796 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 07:32:00.376444    3796 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1228 07:32:00.376444    3796 start.go:496] detecting cgroup driver to use...
	I1228 07:32:00.376444    3796 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1228 07:32:00.376444    3796 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1228 07:32:00.386554    3796 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1228 07:32:00.386554    3796 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1228 07:32:00.404159    3796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1228 07:32:00.423048    3796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1228 07:32:00.441496    3796 containerd.go:147] configuring containerd to use "cgroupfs" as cgroup driver...
	I1228 07:32:00.445314    3796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1228 07:32:00.466074    3796 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:32:00.485405    3796 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1228 07:32:00.505490    3796 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:32:00.524876    3796 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 07:32:00.544739    3796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1228 07:32:00.561759    3796 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1228 07:32:00.583256    3796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1228 07:32:00.602474    3796 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 07:32:00.621189    3796 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 07:32:00.637491    3796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:32:00.778455    3796 ssh_runner.go:195] Run: sudo systemctl restart containerd
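
The run of sed -i -r commands above rewrites /etc/containerd/config.toml in place: sandbox image, SystemdCgroup, runtime types, conf_dir, and unprivileged ports. One of those rewrites expressed with Go's regexp package (illustrative input, not the real config file):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// The SystemdCgroup rewrite from the log, as a Go regexp:
	// force SystemdCgroup = false while preserving indentation.
	conf := []byte("[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n")
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	conf = re.ReplaceAll(conf, []byte("${1}SystemdCgroup = false"))
	fmt.Print(string(conf))
}
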
	I1228 07:32:00.946979    3796 start.go:496] detecting cgroup driver to use...
	I1228 07:32:00.946979    3796 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1228 07:32:00.950972    3796 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1228 07:32:00.974971    3796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 07:32:00.999970    3796 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 07:32:01.058978    3796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 07:32:01.080993    3796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1228 07:32:01.099981    3796 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:32:01.125983    3796 ssh_runner.go:195] Run: which cri-dockerd
	I1228 07:32:01.136971    3796 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1228 07:32:01.149974    3796 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1228 07:32:01.173938    3796 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1228 07:32:01.320106    3796 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1228 07:32:01.466543    3796 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1228 07:32:01.466543    3796 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1228 07:32:01.493298    3796 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1228 07:32:01.514267    3796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:32:01.644519    3796 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1228 07:32:02.483750    3796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 07:32:02.507066    3796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1228 07:32:02.531601    3796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1228 07:32:02.554535    3796 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1228 07:32:02.696611    3796 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1228 07:32:02.834802    3796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:32:03.004248    3796 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1228 07:32:03.032716    3796 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1228 07:32:03.055944    3796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:32:03.196095    3796 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1228 07:32:03.310641    3796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1228 07:32:03.328370    3796 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1228 07:32:03.333734    3796 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1228 07:32:03.342275    3796 start.go:574] Will wait 60s for crictl version
	I1228 07:32:03.346759    3796 ssh_runner.go:195] Run: which crictl
	I1228 07:32:03.358983    3796 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 07:32:03.404445    3796 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1228 07:32:03.408572    3796 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1228 07:32:03.454017    3796 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
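
With the runtime back up, the daemon version is probed twice via docker's Go-template --format flag. The equivalent probe from Go (requires a reachable docker daemon; sketch only):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same probe as the log: ask the daemon for its server version.
	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("docker server version:", strings.TrimSpace(string(out)))
}
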
	I1228 07:32:00.614204    9412 api_server.go:315] stopped: https://127.0.0.1:55937/healthz: Get "https://127.0.0.1:55937/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1228 07:32:00.618279    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:00.655473    9412 logs.go:282] 3 containers: [abfb381d03bd 7b42c98a4262 bfa8dc267780]
	I1228 07:32:00.658475    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:00.690445    9412 logs.go:282] 1 containers: [94cc14c728d5]
	I1228 07:32:00.693317    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:00.724464    9412 logs.go:282] 0 containers: []
	W1228 07:32:00.724464    9412 logs.go:284] No container was found matching "coredns"
	I1228 07:32:00.727467    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:00.758463    9412 logs.go:282] 2 containers: [3ddeeca293a6 47ffeb4b853d]
	I1228 07:32:00.762459    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:00.798801    9412 logs.go:282] 0 containers: []
	W1228 07:32:00.798801    9412 logs.go:284] No container was found matching "kube-proxy"
	I1228 07:32:00.802792    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:00.834797    9412 logs.go:282] 2 containers: [8169474521a1 67014a6dfb79]
	I1228 07:32:00.837793    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:00.870884    9412 logs.go:282] 0 containers: []
	W1228 07:32:00.870884    9412 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:00.874933    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:00.907968    9412 logs.go:282] 0 containers: []
	W1228 07:32:00.907968    9412 logs.go:284] No container was found matching "storage-provisioner"
	I1228 07:32:00.907968    9412 logs.go:123] Gathering logs for kube-controller-manager [8169474521a1] ...
	I1228 07:32:00.907968    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8169474521a1"
	I1228 07:32:00.944973    9412 logs.go:123] Gathering logs for kube-controller-manager [67014a6dfb79] ...
	I1228 07:32:00.944973    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67014a6dfb79"
	I1228 07:32:00.995972    9412 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:00.995972    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:01.026970    9412 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:01.026970    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:01.063993    9412 logs.go:123] Gathering logs for kube-apiserver [abfb381d03bd] ...
	I1228 07:32:01.063993    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfb381d03bd"
	I1228 07:32:01.101989    9412 logs.go:123] Gathering logs for etcd [94cc14c728d5] ...
	I1228 07:32:01.101989    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94cc14c728d5"
	I1228 07:32:01.141992    9412 logs.go:123] Gathering logs for kube-scheduler [3ddeeca293a6] ...
	I1228 07:32:01.141992    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ddeeca293a6"
	I1228 07:32:01.182942    9412 logs.go:123] Gathering logs for kube-scheduler [47ffeb4b853d] ...
	I1228 07:32:01.182942    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47ffeb4b853d"
	I1228 07:32:01.228021    9412 logs.go:123] Gathering logs for container status ...
	I1228 07:32:01.228021    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:01.285105    9412 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:01.285105    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:01.364105    9412 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:01.364105    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1228 07:32:03.497426    3796 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I1228 07:32:03.501051    3796 cli_runner.go:164] Run: docker exec -t custom-flannel-410600 dig +short host.docker.internal
	I1228 07:32:03.648548    3796 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1228 07:32:03.652709    3796 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1228 07:32:03.660079    3796 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:32:03.679028    3796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-flannel-410600
	I1228 07:32:03.733880    3796 kubeadm.go:884] updating cluster {Name:custom-flannel-410600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:custom-flannel-410600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:32:03.733880    3796 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1228 07:32:03.737603    3796 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1228 07:32:03.770498    3796 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1228 07:32:03.770498    3796 docker.go:624] Images already preloaded, skipping extraction
	I1228 07:32:03.773820    3796 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1228 07:32:03.802871    3796 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1228 07:32:03.802871    3796 cache_images.go:86] Images are preloaded, skipping loading
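
The image listing above is how minikube decides the preload extraction can be skipped: every required image ref is already present. A sketch of that presence check (the required list here is a subset taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List image refs the same way the log does, then verify the
	// preloaded set is complete.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.35.0",
		"registry.k8s.io/etcd:3.6.6-0",
		"registry.k8s.io/pause:3.10.1",
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing preloaded image:", img)
		}
	}
}
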
	I1228 07:32:03.802871    3796 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 docker true true} ...
	I1228 07:32:03.802871    3796 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-410600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:custom-flannel-410600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml}
	I1228 07:32:03.806415    3796 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1228 07:32:03.881576    3796 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1228 07:32:03.881576    3796 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 07:32:03.881576    3796 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-410600 NodeName:custom-flannel-410600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:32:03.881576    3796 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "custom-flannel-410600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
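The kubeadm config above is rendered from cluster parameters such as the node name, advertise address, and pod subnet. A minimal text/template sketch in that spirit (the template here is illustrative, not minikube's real one):

package main

import (
	"os"
	"text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  name: "{{.NodeName}}"
`

func main() {
	// Render a minimal kubeadm config from parameters, in the spirit of
	// the full document above. Values are taken from the log.
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	t.Execute(os.Stdout, map[string]any{
		"NodeIP":   "192.168.85.2",
		"Port":     8443,
		"NodeName": "custom-flannel-410600",
	})
}
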
	I1228 07:32:03.885744    3796 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:32:03.898752    3796 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:32:03.903074    3796 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:32:03.916808    3796 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I1228 07:32:03.940611    3796 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:32:03.960255    3796 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2242 bytes)
	I1228 07:32:03.987179    3796 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:32:03.994923    3796 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:32:04.017051    3796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:32:00.426486   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:32:00.430425   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": EOF
	I1228 07:32:00.433609   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:00.469549   10604 logs.go:282] 1 containers: [98141142a325]
	I1228 07:32:00.473471   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:00.506422   10604 logs.go:282] 1 containers: [86effdc549fd]
	I1228 07:32:00.510431   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:00.545740   10604 logs.go:282] 1 containers: [51d165bee2b3]
	I1228 07:32:00.548746   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:00.581224   10604 logs.go:282] 2 containers: [7e91a1649939 3e6ad5f26302]
	I1228 07:32:00.586249   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:00.618943   10604 logs.go:282] 1 containers: [54081c44d8cf]
	I1228 07:32:00.622843   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:00.652481   10604 logs.go:282] 1 containers: [0c08327d6047]
	I1228 07:32:00.656474   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:00.685480   10604 logs.go:282] 0 containers: []
	W1228 07:32:00.685480   10604 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:00.689845   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:00.725457   10604 logs.go:282] 1 containers: [8391aeb4a821]
	I1228 07:32:00.725457   10604 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:00.725457   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:00.761463   10604 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:00.761463   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:00.850807   10604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:00.850807   10604 logs.go:123] Gathering logs for kube-apiserver [98141142a325] ...
	I1228 07:32:00.850807   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98141142a325"
	I1228 07:32:00.890701   10604 logs.go:123] Gathering logs for coredns [51d165bee2b3] ...
	I1228 07:32:00.890701   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d165bee2b3"
	I1228 07:32:00.927973   10604 logs.go:123] Gathering logs for kube-scheduler [7e91a1649939] ...
	I1228 07:32:00.927973   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e91a1649939"
	I1228 07:32:01.011994   10604 logs.go:123] Gathering logs for kube-proxy [54081c44d8cf] ...
	I1228 07:32:01.011994   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54081c44d8cf"
	I1228 07:32:01.044971   10604 logs.go:123] Gathering logs for storage-provisioner [8391aeb4a821] ...
	I1228 07:32:01.044971   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8391aeb4a821"
	I1228 07:32:01.076982   10604 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:01.076982   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:01.188922   10604 logs.go:123] Gathering logs for etcd [86effdc549fd] ...
	I1228 07:32:01.188922   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86effdc549fd"
	I1228 07:32:01.237023   10604 logs.go:123] Gathering logs for kube-scheduler [3e6ad5f26302] ...
	I1228 07:32:01.237023   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6ad5f26302"
	I1228 07:32:01.275546   10604 logs.go:123] Gathering logs for kube-controller-manager [0c08327d6047] ...
	I1228 07:32:01.275615   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c08327d6047"
	I1228 07:32:01.309107   10604 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:01.309107   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:01.362105   10604 logs.go:123] Gathering logs for container status ...
	I1228 07:32:01.362105   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:03.927654   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:32:03.930059   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": EOF
	I1228 07:32:03.933181   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:03.966674   10604 logs.go:282] 1 containers: [98141142a325]
	I1228 07:32:03.972230   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:04.005746   10604 logs.go:282] 1 containers: [86effdc549fd]
	I1228 07:32:04.009649   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:04.042662   10604 logs.go:282] 1 containers: [51d165bee2b3]
	I1228 07:32:04.046161   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:04.076859   10604 logs.go:282] 2 containers: [7e91a1649939 3e6ad5f26302]
	I1228 07:32:04.079868   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:04.113616   10604 logs.go:282] 1 containers: [54081c44d8cf]
	I1228 07:32:04.117704   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:04.147869   10604 logs.go:282] 1 containers: [0c08327d6047]
	I1228 07:32:04.151121   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:04.185225   10604 logs.go:282] 0 containers: []
	W1228 07:32:04.185265   10604 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:04.188108   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:04.225850   10604 logs.go:282] 1 containers: [8391aeb4a821]
	I1228 07:32:04.225850   10604 logs.go:123] Gathering logs for storage-provisioner [8391aeb4a821] ...
	I1228 07:32:04.225850   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8391aeb4a821"
	I1228 07:32:04.263072   10604 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:04.263220   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:04.175330    3796 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:32:04.202982    3796 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600 for IP: 192.168.85.2
	I1228 07:32:04.203080    3796 certs.go:195] generating shared ca certs ...
	I1228 07:32:04.203118    3796 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:32:04.203144    3796 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1228 07:32:04.203890    3796 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1228 07:32:04.204103    3796 certs.go:257] generating profile certs ...
	I1228 07:32:04.204519    3796 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\client.key
	I1228 07:32:04.204546    3796 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\client.crt with IP's: []
	I1228 07:32:04.242842    3796 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\client.crt ...
	I1228 07:32:04.242842    3796 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\client.crt: {Name:mk97fcd76cddc5171946821f3bfdeee57c942347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:32:04.243841    3796 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\client.key ...
	I1228 07:32:04.243841    3796 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\client.key: {Name:mkb3be6c9439de4143feb9ddc002bea90d88e485 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:32:04.244849    3796 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\apiserver.key.6ce94ea7
	I1228 07:32:04.244849    3796 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\apiserver.crt.6ce94ea7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1228 07:32:04.318413    3796 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\apiserver.crt.6ce94ea7 ...
	I1228 07:32:04.318413    3796 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\apiserver.crt.6ce94ea7: {Name:mkc6ad496c3e0d3dce87a005ab5174dc7463a9c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:32:04.319414    3796 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\apiserver.key.6ce94ea7 ...
	I1228 07:32:04.319414    3796 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\apiserver.key.6ce94ea7: {Name:mk1469871181d3ae91be0b491a9c0056be02fba9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:32:04.320411    3796 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\apiserver.crt.6ce94ea7 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\apiserver.crt
	I1228 07:32:04.334412    3796 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\apiserver.key.6ce94ea7 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\apiserver.key
	I1228 07:32:04.335423    3796 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\proxy-client.key
	I1228 07:32:04.335423    3796 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\proxy-client.crt with IP's: []
	I1228 07:32:04.389348    3796 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\proxy-client.crt ...
	I1228 07:32:04.389348    3796 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\proxy-client.crt: {Name:mke8cf52f4ab76c9635ab99e03c0072e9bf48563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:32:04.390344    3796 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\proxy-client.key ...
	I1228 07:32:04.390344    3796 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\proxy-client.key: {Name:mk10159d2e52669672814ea29d10be9c96cf3d51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:32:04.404341    3796 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13556.pem (1338 bytes)
	W1228 07:32:04.404612    3796 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13556_empty.pem, impossibly tiny 0 bytes
	I1228 07:32:04.404612    3796 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1228 07:32:04.404612    3796 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1228 07:32:04.404612    3796 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1228 07:32:04.405183    3796 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1228 07:32:04.405183    3796 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\135562.pem (1708 bytes)
	I1228 07:32:04.406298    3796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:32:04.435234    3796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 07:32:04.464798    3796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:32:04.493588    3796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:32:04.520649    3796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1228 07:32:04.549181    3796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 07:32:04.576744    3796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:32:04.605221    3796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-410600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 07:32:04.632614    3796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:32:04.666831    3796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13556.pem --> /usr/share/ca-certificates/13556.pem (1338 bytes)
	I1228 07:32:04.695390    3796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\135562.pem --> /usr/share/ca-certificates/135562.pem (1708 bytes)
	I1228 07:32:04.724516    3796 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:32:04.747613    3796 ssh_runner.go:195] Run: openssl version
	I1228 07:32:04.761625    3796 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/135562.pem
	I1228 07:32:04.777865    3796 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/135562.pem /etc/ssl/certs/135562.pem
	I1228 07:32:04.795007    3796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135562.pem
	I1228 07:32:04.802008    3796 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:37 /usr/share/ca-certificates/135562.pem
	I1228 07:32:04.806000    3796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135562.pem
	I1228 07:32:04.856370    3796 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 07:32:04.872684    3796 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/135562.pem /etc/ssl/certs/3ec20f2e.0
	I1228 07:32:04.893122    3796 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:32:04.912240    3796 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:32:04.936691    3796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:32:04.944246    3796 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:29 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:32:04.948245    3796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:32:04.995249    3796 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 07:32:05.012239    3796 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1228 07:32:05.029242    3796 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13556.pem
	I1228 07:32:05.053069    3796 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13556.pem /etc/ssl/certs/13556.pem
	I1228 07:32:05.071551    3796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13556.pem
	I1228 07:32:05.079565    3796 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:37 /usr/share/ca-certificates/13556.pem
	I1228 07:32:05.083958    3796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13556.pem
	I1228 07:32:05.133282    3796 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:32:05.153642    3796 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13556.pem /etc/ssl/certs/51391683.0
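(The three passes above install each CA the way OpenSSL expects: copy the PEM under /usr/share/ca-certificates, compute its subject hash with `openssl x509 -hash -noout`, then symlink /etc/ssl/certs/<hash>.0 at it so TLS clients on the node can find it. A minimal local sketch of that pattern follows; the assumption is that it runs directly on the node rather than through minikube's ssh_runner:)

// Sketch of the CA-install pattern visible in the log: hash the cert with
// openssl, then create the /etc/ssl/certs/<hash>.0 symlink (mimics `ln -fs`).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // replace any stale link, as `ln -fs` would
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}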
	I1228 07:32:05.169179    3796 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:32:05.175180    3796 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1228 07:32:05.175180    3796 kubeadm.go:401] StartCluster: {Name:custom-flannel-410600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:custom-flannel-410600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:32:05.178187    3796 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1228 07:32:05.217450    3796 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:32:05.237294    3796 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1228 07:32:05.252765    3796 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:32:05.256180    3796 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:32:05.272141    3796 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:32:05.272168    3796 kubeadm.go:158] found existing configuration files:
	
	I1228 07:32:05.276579    3796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:32:05.288752    3796 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:32:05.292756    3796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:32:05.309538    3796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:32:05.323503    3796 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:32:05.326967    3796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:32:05.343866    3796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:32:05.358500    3796 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:32:05.362461    3796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:32:05.381012    3796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:32:05.393112    3796 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:32:05.397108    3796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1228 07:32:05.412108    3796 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:32:05.544458    3796 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1228 07:32:05.634518    3796 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1228 07:32:05.735684    3796 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 07:32:04.325411   10604 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:04.326429   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:04.435234   10604 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:04.435234   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:04.472456   10604 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:04.472456   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:04.564740   10604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:04.564740   10604 logs.go:123] Gathering logs for coredns [51d165bee2b3] ...
	I1228 07:32:04.564740   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d165bee2b3"
	I1228 07:32:04.599878   10604 logs.go:123] Gathering logs for kube-scheduler [7e91a1649939] ...
	I1228 07:32:04.599932   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e91a1649939"
	I1228 07:32:04.682710   10604 logs.go:123] Gathering logs for kube-proxy [54081c44d8cf] ...
	I1228 07:32:04.682710   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54081c44d8cf"
	I1228 07:32:04.718969   10604 logs.go:123] Gathering logs for kube-controller-manager [0c08327d6047] ...
	I1228 07:32:04.719036   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c08327d6047"
	I1228 07:32:04.751616   10604 logs.go:123] Gathering logs for container status ...
	I1228 07:32:04.751616   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:04.812018   10604 logs.go:123] Gathering logs for kube-apiserver [98141142a325] ...
	I1228 07:32:04.812018   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98141142a325"
	I1228 07:32:04.848672   10604 logs.go:123] Gathering logs for etcd [86effdc549fd] ...
	I1228 07:32:04.848672   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86effdc549fd"
	I1228 07:32:04.892920   10604 logs.go:123] Gathering logs for kube-scheduler [3e6ad5f26302] ...
	I1228 07:32:04.892989   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6ad5f26302"
	I1228 07:32:07.433164   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:32:07.436023   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": EOF
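(Interleaved here, the parallel test process (10604) is stuck in its healthz/log-gathering loop: every probe of https://127.0.0.1:55731/healthz ends in EOF because nothing is listening on the forwarded port yet. A minimal reproduction of such a probe follows; the TLS settings are an assumption, with verification skipped purely for illustration:)

// Minimal apiserver healthz probe, in the spirit of the api_server.go
// lines above. InsecureSkipVerify is for illustration only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://127.0.0.1:55731/healthz")
	if err != nil {
		fmt.Println("stopped:", err) // e.g. EOF while the apiserver is down
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}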
	I1228 07:32:07.441160   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:07.477588   10604 logs.go:282] 1 containers: [98141142a325]
	I1228 07:32:07.481094   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:07.510884   10604 logs.go:282] 1 containers: [86effdc549fd]
	I1228 07:32:07.514944   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:07.547674   10604 logs.go:282] 1 containers: [51d165bee2b3]
	I1228 07:32:07.551539   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:07.579998   10604 logs.go:282] 2 containers: [7e91a1649939 3e6ad5f26302]
	I1228 07:32:07.583982   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:07.614105   10604 logs.go:282] 1 containers: [54081c44d8cf]
	I1228 07:32:07.618434   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:07.650546   10604 logs.go:282] 1 containers: [0c08327d6047]
	I1228 07:32:07.654927   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:07.690191   10604 logs.go:282] 0 containers: []
	W1228 07:32:07.690191   10604 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:07.693521   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:07.729621   10604 logs.go:282] 1 containers: [8391aeb4a821]
	I1228 07:32:07.729621   10604 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:07.730743   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:07.823192   10604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:07.823192   10604 logs.go:123] Gathering logs for kube-apiserver [98141142a325] ...
	I1228 07:32:07.823192   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98141142a325"
	I1228 07:32:07.863351   10604 logs.go:123] Gathering logs for etcd [86effdc549fd] ...
	I1228 07:32:07.863351   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86effdc549fd"
	I1228 07:32:07.909544   10604 logs.go:123] Gathering logs for coredns [51d165bee2b3] ...
	I1228 07:32:07.909606   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d165bee2b3"
	I1228 07:32:07.942609   10604 logs.go:123] Gathering logs for kube-scheduler [3e6ad5f26302] ...
	I1228 07:32:07.942659   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6ad5f26302"
	I1228 07:32:07.976818   10604 logs.go:123] Gathering logs for kube-proxy [54081c44d8cf] ...
	I1228 07:32:07.976818   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54081c44d8cf"
	I1228 07:32:08.015292   10604 logs.go:123] Gathering logs for storage-provisioner [8391aeb4a821] ...
	I1228 07:32:08.015292   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8391aeb4a821"
	I1228 07:32:08.049551   10604 logs.go:123] Gathering logs for container status ...
	I1228 07:32:08.049551   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:08.113924   10604 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:08.113924   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:08.221165   10604 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:08.221165   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:08.262455   10604 logs.go:123] Gathering logs for kube-scheduler [7e91a1649939] ...
	I1228 07:32:08.262455   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e91a1649939"
	I1228 07:32:08.342925   10604 logs.go:123] Gathering logs for kube-controller-manager [0c08327d6047] ...
	I1228 07:32:08.342925   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c08327d6047"
	I1228 07:32:08.379089   10604 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:08.379611   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:11.446356    9412 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.0820925s)
	W1228 07:32:11.446881    9412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1228 07:32:11.446881    9412 logs.go:123] Gathering logs for kube-apiserver [7b42c98a4262] ...
	I1228 07:32:11.446881    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b42c98a4262"
	I1228 07:32:11.487238    9412 logs.go:123] Gathering logs for kube-apiserver [bfa8dc267780] ...
	I1228 07:32:11.487238    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa8dc267780"
	I1228 07:32:14.049979    9412 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55937/healthz ...
	I1228 07:32:14.052600    9412 api_server.go:315] stopped: https://127.0.0.1:55937/healthz: Get "https://127.0.0.1:55937/healthz": EOF
	I1228 07:32:14.056969    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:14.087979    9412 logs.go:282] 3 containers: [abfb381d03bd 7b42c98a4262 bfa8dc267780]
	I1228 07:32:14.092000    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:10.937007   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:32:10.939856   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": EOF
	I1228 07:32:10.946274   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:10.979649   10604 logs.go:282] 1 containers: [98141142a325]
	I1228 07:32:10.983115   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:11.023997   10604 logs.go:282] 1 containers: [86effdc549fd]
	I1228 07:32:11.031326   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:11.062927   10604 logs.go:282] 1 containers: [51d165bee2b3]
	I1228 07:32:11.067781   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:11.097374   10604 logs.go:282] 2 containers: [7e91a1649939 3e6ad5f26302]
	I1228 07:32:11.101962   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:11.136591   10604 logs.go:282] 1 containers: [54081c44d8cf]
	I1228 07:32:11.139946   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:11.168407   10604 logs.go:282] 1 containers: [0c08327d6047]
	I1228 07:32:11.172877   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:11.201158   10604 logs.go:282] 0 containers: []
	W1228 07:32:11.201158   10604 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:11.205981   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:11.242873   10604 logs.go:282] 1 containers: [8391aeb4a821]
	I1228 07:32:11.242954   10604 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:11.242954   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:11.354563   10604 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:11.354563   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:11.393602   10604 logs.go:123] Gathering logs for etcd [86effdc549fd] ...
	I1228 07:32:11.393602   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86effdc549fd"
	I1228 07:32:11.442309   10604 logs.go:123] Gathering logs for kube-scheduler [3e6ad5f26302] ...
	I1228 07:32:11.442309   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6ad5f26302"
	I1228 07:32:11.479690   10604 logs.go:123] Gathering logs for kube-proxy [54081c44d8cf] ...
	I1228 07:32:11.479738   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54081c44d8cf"
	I1228 07:32:11.523468   10604 logs.go:123] Gathering logs for storage-provisioner [8391aeb4a821] ...
	I1228 07:32:11.523520   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8391aeb4a821"
	I1228 07:32:11.574503   10604 logs.go:123] Gathering logs for container status ...
	I1228 07:32:11.574503   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:11.658514   10604 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:11.658514   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:11.755521   10604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:11.755521   10604 logs.go:123] Gathering logs for kube-apiserver [98141142a325] ...
	I1228 07:32:11.755521   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98141142a325"
	I1228 07:32:11.798082   10604 logs.go:123] Gathering logs for coredns [51d165bee2b3] ...
	I1228 07:32:11.798142   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d165bee2b3"
	I1228 07:32:11.836522   10604 logs.go:123] Gathering logs for kube-scheduler [7e91a1649939] ...
	I1228 07:32:11.836620   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e91a1649939"
	I1228 07:32:11.909824   10604 logs.go:123] Gathering logs for kube-controller-manager [0c08327d6047] ...
	I1228 07:32:11.909824   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c08327d6047"
	I1228 07:32:11.947526   10604 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:11.947526   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:15.698339    3796 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:32:15.698339    3796 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:32:15.698339    3796 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:32:15.699379    3796 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1228 07:32:15.699468    3796 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1228 07:32:15.699603    3796 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1228 07:32:15.699787    3796 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1228 07:32:15.699869    3796 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1228 07:32:15.700017    3796 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1228 07:32:15.700139    3796 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1228 07:32:15.700236    3796 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1228 07:32:15.700311    3796 kubeadm.go:319] CONFIG_INET: enabled
	I1228 07:32:15.700407    3796 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1228 07:32:15.700592    3796 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1228 07:32:15.700836    3796 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1228 07:32:15.700998    3796 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1228 07:32:15.701258    3796 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1228 07:32:15.701348    3796 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1228 07:32:15.701348    3796 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1228 07:32:15.701348    3796 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1228 07:32:15.701348    3796 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1228 07:32:15.701885    3796 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1228 07:32:15.702015    3796 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1228 07:32:15.702114    3796 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1228 07:32:15.702265    3796 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1228 07:32:15.702372    3796 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1228 07:32:15.702479    3796 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1228 07:32:15.702533    3796 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1228 07:32:15.702683    3796 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1228 07:32:15.702730    3796 kubeadm.go:319] OS: Linux
	I1228 07:32:15.702810    3796 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:32:15.702810    3796 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1228 07:32:15.702810    3796 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:32:15.702810    3796 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:32:15.702810    3796 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:32:15.702810    3796 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:32:15.702810    3796 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:32:15.703342    3796 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:32:15.703481    3796 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1228 07:32:15.703481    3796 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:32:15.703481    3796 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:32:15.703481    3796 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:32:15.704042    3796 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:32:15.706627    3796 out.go:252]   - Generating certificates and keys ...
	I1228 07:32:15.706627    3796 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:32:15.707167    3796 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:32:15.707330    3796 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1228 07:32:15.707330    3796 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1228 07:32:15.707330    3796 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1228 07:32:15.707330    3796 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1228 07:32:15.707865    3796 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1228 07:32:15.708050    3796 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-410600 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1228 07:32:15.708050    3796 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1228 07:32:15.708646    3796 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-410600 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1228 07:32:15.708774    3796 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1228 07:32:15.708774    3796 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1228 07:32:15.708774    3796 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1228 07:32:15.708774    3796 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:32:15.708774    3796 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:32:15.709553    3796 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:32:15.709612    3796 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:32:15.709841    3796 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:32:15.709958    3796 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:32:15.709958    3796 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:32:15.709958    3796 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:32:15.713653    3796 out.go:252]   - Booting up control plane ...
	I1228 07:32:15.713653    3796 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:32:15.713653    3796 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:32:15.713653    3796 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:32:15.713653    3796 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:32:15.713653    3796 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:32:15.714652    3796 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:32:15.714652    3796 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:32:15.714652    3796 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:32:15.715271    3796 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:32:15.715428    3796 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 07:32:15.715428    3796 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.346244ms
	I1228 07:32:15.715693    3796 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1228 07:32:15.715693    3796 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1228 07:32:15.715693    3796 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1228 07:32:15.716224    3796 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1228 07:32:15.716351    3796 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.506320682s
	I1228 07:32:15.716509    3796 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.759538679s
	I1228 07:32:15.716509    3796 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502219563s
	I1228 07:32:15.716712    3796 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1228 07:32:15.716712    3796 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1228 07:32:15.716712    3796 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1228 07:32:15.717468    3796 kubeadm.go:319] [mark-control-plane] Marking the node custom-flannel-410600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1228 07:32:15.717749    3796 kubeadm.go:319] [bootstrap-token] Using token: dvx5fw.syd7c07deyeqk751
	I1228 07:32:15.720730    3796 out.go:252]   - Configuring RBAC rules ...
	I1228 07:32:15.720730    3796 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1228 07:32:15.721084    3796 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1228 07:32:15.721217    3796 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1228 07:32:15.721510    3796 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1228 07:32:15.721733    3796 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1228 07:32:15.721966    3796 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1228 07:32:15.722120    3796 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1228 07:32:15.722120    3796 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1228 07:32:15.722404    3796 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1228 07:32:15.722404    3796 kubeadm.go:319] 
	I1228 07:32:15.722511    3796 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1228 07:32:15.722570    3796 kubeadm.go:319] 
	I1228 07:32:15.722720    3796 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1228 07:32:15.722720    3796 kubeadm.go:319] 
	I1228 07:32:15.722720    3796 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1228 07:32:15.722856    3796 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1228 07:32:15.723013    3796 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1228 07:32:15.723013    3796 kubeadm.go:319] 
	I1228 07:32:15.723126    3796 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1228 07:32:15.723169    3796 kubeadm.go:319] 
	I1228 07:32:15.723258    3796 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1228 07:32:15.723258    3796 kubeadm.go:319] 
	I1228 07:32:15.723414    3796 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1228 07:32:15.723558    3796 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1228 07:32:15.723558    3796 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1228 07:32:15.723558    3796 kubeadm.go:319] 
	I1228 07:32:15.723558    3796 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1228 07:32:15.724116    3796 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1228 07:32:15.724116    3796 kubeadm.go:319] 
	I1228 07:32:15.724295    3796 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token dvx5fw.syd7c07deyeqk751 \
	I1228 07:32:15.724502    3796 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:3fea1b033220c76616a69daefe9de210d60574273e9df21e09282f95b8582ae4 \
	I1228 07:32:15.724502    3796 kubeadm.go:319] 	--control-plane 
	I1228 07:32:15.724502    3796 kubeadm.go:319] 
	I1228 07:32:15.724782    3796 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1228 07:32:15.724782    3796 kubeadm.go:319] 
	I1228 07:32:15.724782    3796 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token dvx5fw.syd7c07deyeqk751 \
	I1228 07:32:15.724782    3796 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:3fea1b033220c76616a69daefe9de210d60574273e9df21e09282f95b8582ae4 
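(For reference, the `--discovery-token-ca-cert-hash sha256:...` value printed in the join commands above is the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info, which a joining node uses to pin the control plane's CA. It can be recomputed from the CA certificate; the path below is the conventional kubeadm location, assumed here:)

// Recompute kubeadm's discovery-token CA cert hash: SHA-256 over the
// DER-encoded Subject Public Key Info of the cluster CA certificate.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // conventional kubeadm path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}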
	I1228 07:32:15.724782    3796 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1228 07:32:15.727405    3796 out.go:179] * Configuring testdata\kube-flannel.yaml (Container Networking Interface) ...
	I1228 07:32:14.127210    9412 logs.go:282] 1 containers: [94cc14c728d5]
	I1228 07:32:14.131471    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:14.159443    9412 logs.go:282] 0 containers: []
	W1228 07:32:14.159527    9412 logs.go:284] No container was found matching "coredns"
	I1228 07:32:14.163907    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:14.195990    9412 logs.go:282] 2 containers: [3ddeeca293a6 47ffeb4b853d]
	I1228 07:32:14.200264    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:14.228416    9412 logs.go:282] 0 containers: []
	W1228 07:32:14.228416    9412 logs.go:284] No container was found matching "kube-proxy"
	I1228 07:32:14.232597    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:14.262630    9412 logs.go:282] 3 containers: [32d8cc1c272d 8169474521a1 67014a6dfb79]
	I1228 07:32:14.265969    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:14.296189    9412 logs.go:282] 0 containers: []
	W1228 07:32:14.296189    9412 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:14.301107    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:14.330681    9412 logs.go:282] 0 containers: []
	W1228 07:32:14.330681    9412 logs.go:284] No container was found matching "storage-provisioner"
	I1228 07:32:14.330681    9412 logs.go:123] Gathering logs for kube-controller-manager [67014a6dfb79] ...
	I1228 07:32:14.330681    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67014a6dfb79"
	I1228 07:32:14.377682    9412 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:14.377682    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:14.415821    9412 logs.go:123] Gathering logs for kube-apiserver [7b42c98a4262] ...
	I1228 07:32:14.415821    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b42c98a4262"
	W1228 07:32:14.447818    9412 logs.go:130] failed kube-apiserver [7b42c98a4262]: command: /bin/bash -c "docker logs --tail 400 7b42c98a4262" /bin/bash -c "docker logs --tail 400 7b42c98a4262": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 7b42c98a4262
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 7b42c98a4262
	
	** /stderr **
	I1228 07:32:14.447818    9412 logs.go:123] Gathering logs for kube-apiserver [bfa8dc267780] ...
	I1228 07:32:14.447818    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa8dc267780"
	I1228 07:32:14.495081    9412 logs.go:123] Gathering logs for etcd [94cc14c728d5] ...
	I1228 07:32:14.495081    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94cc14c728d5"
	I1228 07:32:14.536330    9412 logs.go:123] Gathering logs for kube-scheduler [3ddeeca293a6] ...
	I1228 07:32:14.536330    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ddeeca293a6"
	I1228 07:32:14.571040    9412 logs.go:123] Gathering logs for kube-controller-manager [32d8cc1c272d] ...
	I1228 07:32:14.571040    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32d8cc1c272d"
	I1228 07:32:14.606336    9412 logs.go:123] Gathering logs for container status ...
	I1228 07:32:14.606336    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:14.663039    9412 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:14.663039    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:14.755501    9412 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:14.755501    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:14.800018    9412 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:14.800018    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:14.883303    9412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:14.883303    9412 logs.go:123] Gathering logs for kube-apiserver [abfb381d03bd] ...
	I1228 07:32:14.883303    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfb381d03bd"
	I1228 07:32:14.923749    9412 logs.go:123] Gathering logs for kube-scheduler [47ffeb4b853d] ...
	I1228 07:32:14.923810    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47ffeb4b853d"
	I1228 07:32:14.965029    9412 logs.go:123] Gathering logs for kube-controller-manager [8169474521a1] ...
	I1228 07:32:14.965029    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8169474521a1"
	I1228 07:32:17.502192    9412 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55937/healthz ...
	I1228 07:32:17.505250    9412 api_server.go:315] stopped: https://127.0.0.1:55937/healthz: Get "https://127.0.0.1:55937/healthz": EOF
	I1228 07:32:17.509071    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:17.539660    9412 logs.go:282] 2 containers: [abfb381d03bd bfa8dc267780]
	I1228 07:32:17.543532    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:17.571691    9412 logs.go:282] 1 containers: [94cc14c728d5]
	I1228 07:32:17.575050    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:17.607137    9412 logs.go:282] 0 containers: []
	W1228 07:32:17.607210    9412 logs.go:284] No container was found matching "coredns"
	I1228 07:32:17.610975    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:17.642142    9412 logs.go:282] 2 containers: [3ddeeca293a6 47ffeb4b853d]
	I1228 07:32:17.645975    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:17.673793    9412 logs.go:282] 0 containers: []
	W1228 07:32:17.673793    9412 logs.go:284] No container was found matching "kube-proxy"
	I1228 07:32:17.679227    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:17.708444    9412 logs.go:282] 3 containers: [32d8cc1c272d 8169474521a1 67014a6dfb79]
	I1228 07:32:17.712811    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:17.740936    9412 logs.go:282] 0 containers: []
	W1228 07:32:17.740988    9412 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:17.745264    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:17.779546    9412 logs.go:282] 0 containers: []
	W1228 07:32:17.779602    9412 logs.go:284] No container was found matching "storage-provisioner"
	I1228 07:32:17.779639    9412 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:17.779639    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:17.871722    9412 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:17.871722    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:17.915133    9412 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:17.915133    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:18.000069    9412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:18.000069    9412 logs.go:123] Gathering logs for kube-apiserver [abfb381d03bd] ...
	I1228 07:32:18.000162    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfb381d03bd"
	I1228 07:32:18.040622    9412 logs.go:123] Gathering logs for kube-scheduler [3ddeeca293a6] ...
	I1228 07:32:18.040670    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ddeeca293a6"
	I1228 07:32:18.074773    9412 logs.go:123] Gathering logs for kube-controller-manager [32d8cc1c272d] ...
	I1228 07:32:18.074773    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32d8cc1c272d"
	I1228 07:32:18.108630    9412 logs.go:123] Gathering logs for kube-controller-manager [8169474521a1] ...
	I1228 07:32:18.108630    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8169474521a1"
	I1228 07:32:18.141771    9412 logs.go:123] Gathering logs for kube-controller-manager [67014a6dfb79] ...
	I1228 07:32:18.141771    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67014a6dfb79"
	I1228 07:32:18.184905    9412 logs.go:123] Gathering logs for kube-apiserver [bfa8dc267780] ...
	I1228 07:32:18.185425    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa8dc267780"
	I1228 07:32:18.229629    9412 logs.go:123] Gathering logs for etcd [94cc14c728d5] ...
	I1228 07:32:18.229629    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94cc14c728d5"
	I1228 07:32:18.272478    9412 logs.go:123] Gathering logs for kube-scheduler [47ffeb4b853d] ...
	I1228 07:32:18.272478    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47ffeb4b853d"
	I1228 07:32:18.315584    9412 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:18.315584    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:18.347118    9412 logs.go:123] Gathering logs for container status ...
	I1228 07:32:18.347118    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:15.750593    3796 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1228 07:32:15.754954    3796 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1228 07:32:15.764803    3796 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1228 07:32:15.764916    3796 ssh_runner.go:362] scp testdata\kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4578 bytes)
	I1228 07:32:15.797654    3796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1228 07:32:16.243666    3796 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1228 07:32:16.248672    3796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-410600 minikube.k8s.io/updated_at=2025_12_28T07_32_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba minikube.k8s.io/name=custom-flannel-410600 minikube.k8s.io/primary=true
	I1228 07:32:16.248672    3796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 07:32:16.257675    3796 ops.go:34] apiserver oom_adj: -16
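(The oom_adj probe above, `cat /proc/$(pgrep kube-apiserver)/oom_adj`, confirms the apiserver runs with OOM adjustment -16, making the kernel reluctant to kill it under memory pressure. A sketch of the same check in Go follows; it assumes a single kube-apiserver process on the node, using `pgrep -o` to pick the oldest match:)

// Sketch of the oom_adj check logged above. Assumes one kube-apiserver.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pid, err := exec.Command("pgrep", "-o", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "pgrep:", err)
		return
	}
	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data))) // e.g. -16
}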
	I1228 07:32:16.407399    3796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 07:32:16.908855    3796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 07:32:17.409657    3796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 07:32:17.907949    3796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 07:32:18.406756    3796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 07:32:18.909456    3796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 07:32:14.507090   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:32:19.407597    3796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 07:32:19.908034    3796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 07:32:20.407209    3796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 07:32:20.716147    3796 kubeadm.go:1114] duration metric: took 4.4724109s to wait for elevateKubeSystemPrivileges
	I1228 07:32:20.716191    3796 kubeadm.go:403] duration metric: took 15.540767s to StartCluster
	I1228 07:32:20.716256    3796 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:32:20.716403    3796 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1228 07:32:20.719678    3796 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:32:20.720866    3796 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1228 07:32:20.720866    3796 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1228 07:32:20.720866    3796 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 07:32:20.721393    3796 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-410600"
	I1228 07:32:20.721393    3796 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-410600"
	I1228 07:32:20.721517    3796 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-410600"
	I1228 07:32:20.721517    3796 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-410600"
	I1228 07:32:20.721570    3796 host.go:66] Checking if "custom-flannel-410600" exists ...
	I1228 07:32:20.721570    3796 config.go:182] Loaded profile config "custom-flannel-410600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 07:32:20.723957    3796 out.go:179] * Verifying Kubernetes components...
	I1228 07:32:20.734931    3796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:32:20.734931    3796 cli_runner.go:164] Run: docker container inspect custom-flannel-410600 --format={{.State.Status}}
	I1228 07:32:20.734931    3796 cli_runner.go:164] Run: docker container inspect custom-flannel-410600 --format={{.State.Status}}
	I1228 07:32:20.793306    3796 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-410600"
	I1228 07:32:20.793306    3796 host.go:66] Checking if "custom-flannel-410600" exists ...
	I1228 07:32:20.803926    3796 cli_runner.go:164] Run: docker container inspect custom-flannel-410600 --format={{.State.Status}}
	I1228 07:32:20.817105    3796 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 07:32:20.822698    3796 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:32:20.822698    3796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 07:32:20.829314    3796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-410600
	I1228 07:32:20.866810    3796 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 07:32:20.866810    3796 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 07:32:20.871032    3796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-410600
	I1228 07:32:20.885100    3796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56201 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-410600\id_rsa Username:docker}
	I1228 07:32:20.924121    3796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56201 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-410600\id_rsa Username:docker}
	I1228 07:32:21.112023    3796 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1228 07:32:21.218776    3796 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:32:21.232146    3796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:32:21.327752    3796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 07:32:21.907141    3796 start.go:987] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I1228 07:32:21.911137    3796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-flannel-410600
	I1228 07:32:21.964131    3796 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-410600" to be "Ready" ...
	I1228 07:32:22.415905    3796 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-410600" context rescaled to 1 replicas
	I1228 07:32:22.447842    3796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.2156768s)
	I1228 07:32:22.447842    3796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.120072s)
	I1228 07:32:22.472991    3796 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1228 07:32:20.903108    9412 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55937/healthz ...
	I1228 07:32:20.907125    9412 api_server.go:315] stopped: https://127.0.0.1:55937/healthz: Get "https://127.0.0.1:55937/healthz": EOF
	I1228 07:32:20.912106    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:20.946108    9412 logs.go:282] 2 containers: [abfb381d03bd bfa8dc267780]
	I1228 07:32:20.950104    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:20.979212    9412 logs.go:282] 1 containers: [94cc14c728d5]
	I1228 07:32:20.983226    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:21.011879    9412 logs.go:282] 0 containers: []
	W1228 07:32:21.011969    9412 logs.go:284] No container was found matching "coredns"
	I1228 07:32:21.015629    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:21.053069    9412 logs.go:282] 2 containers: [3ddeeca293a6 47ffeb4b853d]
	I1228 07:32:21.056687    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:21.086291    9412 logs.go:282] 0 containers: []
	W1228 07:32:21.086291    9412 logs.go:284] No container was found matching "kube-proxy"
	I1228 07:32:21.091306    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:21.125447    9412 logs.go:282] 3 containers: [32d8cc1c272d 8169474521a1 67014a6dfb79]
	I1228 07:32:21.129125    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:21.169219    9412 logs.go:282] 0 containers: []
	W1228 07:32:21.169276    9412 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:21.173014    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:21.202588    9412 logs.go:282] 0 containers: []
	W1228 07:32:21.202588    9412 logs.go:284] No container was found matching "storage-provisioner"
	I1228 07:32:21.202588    9412 logs.go:123] Gathering logs for kube-apiserver [abfb381d03bd] ...
	I1228 07:32:21.202588    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfb381d03bd"
	I1228 07:32:21.251584    9412 logs.go:123] Gathering logs for kube-apiserver [bfa8dc267780] ...
	I1228 07:32:21.251584    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa8dc267780"
	I1228 07:32:21.302193    9412 logs.go:123] Gathering logs for etcd [94cc14c728d5] ...
	I1228 07:32:21.302193    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94cc14c728d5"
	I1228 07:32:21.353657    9412 logs.go:123] Gathering logs for kube-controller-manager [32d8cc1c272d] ...
	I1228 07:32:21.353657    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32d8cc1c272d"
	I1228 07:32:21.391197    9412 logs.go:123] Gathering logs for kube-controller-manager [8169474521a1] ...
	I1228 07:32:21.391197    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8169474521a1"
	I1228 07:32:21.433500    9412 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:21.433550    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:21.468876    9412 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:21.468876    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:21.575095    9412 logs.go:123] Gathering logs for kube-scheduler [3ddeeca293a6] ...
	I1228 07:32:21.575095    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ddeeca293a6"
	I1228 07:32:21.619532    9412 logs.go:123] Gathering logs for kube-scheduler [47ffeb4b853d] ...
	I1228 07:32:21.619532    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47ffeb4b853d"
	I1228 07:32:21.667031    9412 logs.go:123] Gathering logs for kube-controller-manager [67014a6dfb79] ...
	I1228 07:32:21.667164    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67014a6dfb79"
	I1228 07:32:21.716372    9412 logs.go:123] Gathering logs for container status ...
	I1228 07:32:21.716947    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:21.774375    9412 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:21.774375    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:21.814634    9412 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:21.814634    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:21.895140    9412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:22.478314    3796 addons.go:530] duration metric: took 1.7574209s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1228 07:32:23.969421    3796 node_ready.go:57] node "custom-flannel-410600" has "Ready":"False" status (will retry)
	I1228 07:32:19.508071   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1228 07:32:19.512389   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:19.544666   10604 logs.go:282] 2 containers: [293c56278bf9 98141142a325]
	I1228 07:32:19.549120   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:19.582620   10604 logs.go:282] 1 containers: [86effdc549fd]
	I1228 07:32:19.585897   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:19.621844   10604 logs.go:282] 1 containers: [51d165bee2b3]
	I1228 07:32:19.625810   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:19.657871   10604 logs.go:282] 2 containers: [7e91a1649939 3e6ad5f26302]
	I1228 07:32:19.661317   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:19.691827   10604 logs.go:282] 1 containers: [54081c44d8cf]
	I1228 07:32:19.697474   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:19.728561   10604 logs.go:282] 2 containers: [43bc05da5b6a 0c08327d6047]
	I1228 07:32:19.731635   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:19.760247   10604 logs.go:282] 0 containers: []
	W1228 07:32:19.760332   10604 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:19.765197   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:19.795557   10604 logs.go:282] 1 containers: [8391aeb4a821]
	I1228 07:32:19.795557   10604 logs.go:123] Gathering logs for kube-controller-manager [43bc05da5b6a] ...
	I1228 07:32:19.795557   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bc05da5b6a"
	I1228 07:32:19.830576   10604 logs.go:123] Gathering logs for storage-provisioner [8391aeb4a821] ...
	I1228 07:32:19.830596   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8391aeb4a821"
	I1228 07:32:19.866550   10604 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:19.866632   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:19.985819   10604 logs.go:123] Gathering logs for kube-apiserver [293c56278bf9] ...
	I1228 07:32:19.985819   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 293c56278bf9"
	I1228 07:32:20.025002   10604 logs.go:123] Gathering logs for kube-proxy [54081c44d8cf] ...
	I1228 07:32:20.025002   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54081c44d8cf"
	I1228 07:32:20.058815   10604 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:20.058815   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:20.117377   10604 logs.go:123] Gathering logs for container status ...
	I1228 07:32:20.117377   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:20.180907   10604 logs.go:123] Gathering logs for kube-apiserver [98141142a325] ...
	I1228 07:32:20.180907   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98141142a325"
	I1228 07:32:20.223785   10604 logs.go:123] Gathering logs for coredns [51d165bee2b3] ...
	I1228 07:32:20.223785   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d165bee2b3"
	I1228 07:32:20.256395   10604 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:20.256395   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1228 07:32:24.395259    9412 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55937/healthz ...
	I1228 07:32:24.398304    9412 api_server.go:315] stopped: https://127.0.0.1:55937/healthz: Get "https://127.0.0.1:55937/healthz": EOF
	I1228 07:32:24.401665    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:24.435489    9412 logs.go:282] 2 containers: [abfb381d03bd bfa8dc267780]
	I1228 07:32:24.439592    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:24.479673    9412 logs.go:282] 1 containers: [94cc14c728d5]
	I1228 07:32:24.484051    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:24.521065    9412 logs.go:282] 0 containers: []
	W1228 07:32:24.521065    9412 logs.go:284] No container was found matching "coredns"
	I1228 07:32:24.526287    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:24.559302    9412 logs.go:282] 2 containers: [3ddeeca293a6 47ffeb4b853d]
	I1228 07:32:24.563304    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:24.591571    9412 logs.go:282] 0 containers: []
	W1228 07:32:24.591571    9412 logs.go:284] No container was found matching "kube-proxy"
	I1228 07:32:24.599826    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:24.635718    9412 logs.go:282] 3 containers: [32d8cc1c272d 8169474521a1 67014a6dfb79]
	I1228 07:32:24.640396    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:24.672842    9412 logs.go:282] 0 containers: []
	W1228 07:32:24.672918    9412 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:24.676535    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:24.705044    9412 logs.go:282] 0 containers: []
	W1228 07:32:24.705044    9412 logs.go:284] No container was found matching "storage-provisioner"
	I1228 07:32:24.705044    9412 logs.go:123] Gathering logs for kube-scheduler [47ffeb4b853d] ...
	I1228 07:32:24.705044    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47ffeb4b853d"
	I1228 07:32:24.750192    9412 logs.go:123] Gathering logs for kube-controller-manager [67014a6dfb79] ...
	I1228 07:32:24.750192    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67014a6dfb79"
	I1228 07:32:24.793550    9412 logs.go:123] Gathering logs for container status ...
	I1228 07:32:24.793550    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:24.842926    9412 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:24.842986    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:24.879208    9412 logs.go:123] Gathering logs for kube-apiserver [abfb381d03bd] ...
	I1228 07:32:24.879208    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfb381d03bd"
	I1228 07:32:24.918965    9412 logs.go:123] Gathering logs for etcd [94cc14c728d5] ...
	I1228 07:32:24.918965    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94cc14c728d5"
	I1228 07:32:24.963614    9412 logs.go:123] Gathering logs for kube-controller-manager [32d8cc1c272d] ...
	I1228 07:32:24.963614    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32d8cc1c272d"
	I1228 07:32:24.997995    9412 logs.go:123] Gathering logs for kube-controller-manager [8169474521a1] ...
	I1228 07:32:24.997995    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8169474521a1"
	I1228 07:32:25.032638    9412 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:25.032709    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:25.065747    9412 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:25.065747    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:25.158820    9412 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:25.158820    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:25.245740    9412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:25.245740    9412 logs.go:123] Gathering logs for kube-apiserver [bfa8dc267780] ...
	I1228 07:32:25.245740    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa8dc267780"
	I1228 07:32:25.295879    9412 logs.go:123] Gathering logs for kube-scheduler [3ddeeca293a6] ...
	I1228 07:32:25.295972    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ddeeca293a6"
	I1228 07:32:27.835011    9412 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55937/healthz ...
	I1228 07:32:27.838020    9412 api_server.go:315] stopped: https://127.0.0.1:55937/healthz: Get "https://127.0.0.1:55937/healthz": EOF
	I1228 07:32:27.841007    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:27.878004    9412 logs.go:282] 2 containers: [abfb381d03bd bfa8dc267780]
	I1228 07:32:27.881004    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:27.918024    9412 logs.go:282] 1 containers: [94cc14c728d5]
	I1228 07:32:27.923007    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:27.960005    9412 logs.go:282] 0 containers: []
	W1228 07:32:27.960005    9412 logs.go:284] No container was found matching "coredns"
	I1228 07:32:27.969008    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:28.003004    9412 logs.go:282] 2 containers: [3ddeeca293a6 47ffeb4b853d]
	I1228 07:32:28.008019    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:28.044017    9412 logs.go:282] 0 containers: []
	W1228 07:32:28.044017    9412 logs.go:284] No container was found matching "kube-proxy"
	I1228 07:32:28.048009    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:28.087044    9412 logs.go:282] 2 containers: [32d8cc1c272d 67014a6dfb79]
	I1228 07:32:28.092010    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:28.126018    9412 logs.go:282] 0 containers: []
	W1228 07:32:28.126018    9412 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:28.130009    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:28.165015    9412 logs.go:282] 0 containers: []
	W1228 07:32:28.165015    9412 logs.go:284] No container was found matching "storage-provisioner"
	I1228 07:32:28.165015    9412 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:28.165015    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:28.197040    9412 logs.go:123] Gathering logs for container status ...
	I1228 07:32:28.197040    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:28.253258    9412 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:28.253258    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:28.349402    9412 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:28.349402    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:28.392263    9412 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:28.392263    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:28.480001    9412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:28.480001    9412 logs.go:123] Gathering logs for kube-apiserver [abfb381d03bd] ...
	I1228 07:32:28.480001    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfb381d03bd"
	I1228 07:32:28.519151    9412 logs.go:123] Gathering logs for kube-apiserver [bfa8dc267780] ...
	I1228 07:32:28.519151    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa8dc267780"
	I1228 07:32:28.570364    9412 logs.go:123] Gathering logs for kube-controller-manager [32d8cc1c272d] ...
	I1228 07:32:28.570364    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32d8cc1c272d"
	I1228 07:32:28.606707    9412 logs.go:123] Gathering logs for etcd [94cc14c728d5] ...
	I1228 07:32:28.606788    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94cc14c728d5"
	I1228 07:32:28.653859    9412 logs.go:123] Gathering logs for kube-scheduler [3ddeeca293a6] ...
	I1228 07:32:28.653859    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ddeeca293a6"
	I1228 07:32:28.689461    9412 logs.go:123] Gathering logs for kube-scheduler [47ffeb4b853d] ...
	I1228 07:32:28.689514    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47ffeb4b853d"
	I1228 07:32:28.735200    9412 logs.go:123] Gathering logs for kube-controller-manager [67014a6dfb79] ...
	I1228 07:32:28.735200    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67014a6dfb79"
	W1228 07:32:26.470087    3796 node_ready.go:57] node "custom-flannel-410600" has "Ready":"False" status (will retry)
	W1228 07:32:28.471195    3796 node_ready.go:57] node "custom-flannel-410600" has "Ready":"False" status (will retry)
	I1228 07:32:31.281188    9412 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55937/healthz ...
	I1228 07:32:31.284876    9412 api_server.go:315] stopped: https://127.0.0.1:55937/healthz: Get "https://127.0.0.1:55937/healthz": EOF
	I1228 07:32:31.288748    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:31.323161    9412 logs.go:282] 2 containers: [abfb381d03bd bfa8dc267780]
	I1228 07:32:31.326713    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:31.357098    9412 logs.go:282] 1 containers: [94cc14c728d5]
	I1228 07:32:31.360977    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:31.389132    9412 logs.go:282] 0 containers: []
	W1228 07:32:31.389194    9412 logs.go:284] No container was found matching "coredns"
	I1228 07:32:31.392725    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:31.428197    9412 logs.go:282] 2 containers: [3ddeeca293a6 47ffeb4b853d]
	I1228 07:32:31.432157    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:31.465443    9412 logs.go:282] 0 containers: []
	W1228 07:32:31.465443    9412 logs.go:284] No container was found matching "kube-proxy"
	I1228 07:32:31.470110    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:31.507659    9412 logs.go:282] 2 containers: [32d8cc1c272d 67014a6dfb79]
	I1228 07:32:31.511659    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:31.543982    9412 logs.go:282] 0 containers: []
	W1228 07:32:31.543982    9412 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:31.548646    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:31.582296    9412 logs.go:282] 0 containers: []
	W1228 07:32:31.582296    9412 logs.go:284] No container was found matching "storage-provisioner"
	I1228 07:32:31.582296    9412 logs.go:123] Gathering logs for kube-controller-manager [67014a6dfb79] ...
	I1228 07:32:31.582296    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67014a6dfb79"
	I1228 07:32:31.627900    9412 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:31.627900    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:31.663963    9412 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:31.663963    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:31.701305    9412 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:31.701305    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:31.786867    9412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:31.786867    9412 logs.go:123] Gathering logs for kube-apiserver [bfa8dc267780] ...
	I1228 07:32:31.786867    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa8dc267780"
	I1228 07:32:31.832040    9412 logs.go:123] Gathering logs for etcd [94cc14c728d5] ...
	I1228 07:32:31.832040    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94cc14c728d5"
	I1228 07:32:31.877597    9412 logs.go:123] Gathering logs for container status ...
	I1228 07:32:31.877597    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:31.931568    9412 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:31.931568    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:32.029859    9412 logs.go:123] Gathering logs for kube-apiserver [abfb381d03bd] ...
	I1228 07:32:32.029859    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfb381d03bd"
	I1228 07:32:32.075834    9412 logs.go:123] Gathering logs for kube-scheduler [3ddeeca293a6] ...
	I1228 07:32:32.075834    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ddeeca293a6"
	I1228 07:32:32.112516    9412 logs.go:123] Gathering logs for kube-scheduler [47ffeb4b853d] ...
	I1228 07:32:32.112516    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47ffeb4b853d"
	I1228 07:32:32.157007    9412 logs.go:123] Gathering logs for kube-controller-manager [32d8cc1c272d] ...
	I1228 07:32:32.157007    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32d8cc1c272d"
	W1228 07:32:30.969987    3796 node_ready.go:57] node "custom-flannel-410600" has "Ready":"False" status (will retry)
	W1228 07:32:32.970243    3796 node_ready.go:57] node "custom-flannel-410600" has "Ready":"False" status (will retry)
	I1228 07:32:33.470102    3796 node_ready.go:49] node "custom-flannel-410600" is "Ready"
	I1228 07:32:33.470102    3796 node_ready.go:38] duration metric: took 11.5057903s for node "custom-flannel-410600" to be "Ready" ...
	I1228 07:32:33.470102    3796 api_server.go:52] waiting for apiserver process to appear ...
	I1228 07:32:33.473099    3796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 07:32:33.494121    3796 api_server.go:72] duration metric: took 12.7730549s to wait for apiserver process to appear ...
	I1228 07:32:33.494121    3796 api_server.go:88] waiting for apiserver healthz status ...
	I1228 07:32:33.494121    3796 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:56200/healthz ...
	I1228 07:32:33.504116    3796 api_server.go:325] https://127.0.0.1:56200/healthz returned 200:
	ok
	I1228 07:32:33.507113    3796 api_server.go:141] control plane version: v1.35.0
	I1228 07:32:33.507113    3796 api_server.go:131] duration metric: took 12.991ms to wait for apiserver health ...
	I1228 07:32:33.507113    3796 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 07:32:33.513116    3796 system_pods.go:59] 7 kube-system pods found
	I1228 07:32:33.513116    3796 system_pods.go:61] "coredns-7d764666f9-87t9m" [ac716350-376c-48f4-a48b-1efe2066879c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:32:33.513116    3796 system_pods.go:61] "etcd-custom-flannel-410600" [4e44743f-2dac-4021-b1b9-d311eb606d18] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:32:33.513116    3796 system_pods.go:61] "kube-apiserver-custom-flannel-410600" [788cdfeb-b321-4fda-bbf7-54dd974ebc2b] Running
	I1228 07:32:33.513116    3796 system_pods.go:61] "kube-controller-manager-custom-flannel-410600" [1b8afce1-78c5-48e6-90df-39fa8be53080] Running
	I1228 07:32:33.513116    3796 system_pods.go:61] "kube-proxy-gvvhc" [340f9bd0-647e-4bd7-adf6-02870c58f757] Running
	I1228 07:32:33.513116    3796 system_pods.go:61] "kube-scheduler-custom-flannel-410600" [51b9f441-806f-47b7-a861-0d00dd0b67ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:32:33.513116    3796 system_pods.go:61] "storage-provisioner" [808967a2-42a8-4d60-9a2f-a281ee2f6ba3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:32:33.513116    3796 system_pods.go:74] duration metric: took 6.0036ms to wait for pod list to return data ...
	I1228 07:32:33.513116    3796 default_sa.go:34] waiting for default service account to be created ...
	I1228 07:32:33.519112    3796 default_sa.go:45] found service account: "default"
	I1228 07:32:33.519112    3796 default_sa.go:55] duration metric: took 5.9958ms for default service account to be created ...
	I1228 07:32:33.519112    3796 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 07:32:33.524128    3796 system_pods.go:86] 7 kube-system pods found
	I1228 07:32:33.524128    3796 system_pods.go:89] "coredns-7d764666f9-87t9m" [ac716350-376c-48f4-a48b-1efe2066879c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:32:33.524128    3796 system_pods.go:89] "etcd-custom-flannel-410600" [4e44743f-2dac-4021-b1b9-d311eb606d18] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:32:33.524128    3796 system_pods.go:89] "kube-apiserver-custom-flannel-410600" [788cdfeb-b321-4fda-bbf7-54dd974ebc2b] Running
	I1228 07:32:33.524128    3796 system_pods.go:89] "kube-controller-manager-custom-flannel-410600" [1b8afce1-78c5-48e6-90df-39fa8be53080] Running
	I1228 07:32:33.524128    3796 system_pods.go:89] "kube-proxy-gvvhc" [340f9bd0-647e-4bd7-adf6-02870c58f757] Running
	I1228 07:32:33.524128    3796 system_pods.go:89] "kube-scheduler-custom-flannel-410600" [51b9f441-806f-47b7-a861-0d00dd0b67ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:32:33.524128    3796 system_pods.go:89] "storage-provisioner" [808967a2-42a8-4d60-9a2f-a281ee2f6ba3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:32:33.524128    3796 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1228 07:32:33.807281    3796 system_pods.go:86] 7 kube-system pods found
	I1228 07:32:33.808257    3796 system_pods.go:89] "coredns-7d764666f9-87t9m" [ac716350-376c-48f4-a48b-1efe2066879c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:32:33.808257    3796 system_pods.go:89] "etcd-custom-flannel-410600" [4e44743f-2dac-4021-b1b9-d311eb606d18] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:32:33.808257    3796 system_pods.go:89] "kube-apiserver-custom-flannel-410600" [788cdfeb-b321-4fda-bbf7-54dd974ebc2b] Running
	I1228 07:32:33.808257    3796 system_pods.go:89] "kube-controller-manager-custom-flannel-410600" [1b8afce1-78c5-48e6-90df-39fa8be53080] Running
	I1228 07:32:33.808257    3796 system_pods.go:89] "kube-proxy-gvvhc" [340f9bd0-647e-4bd7-adf6-02870c58f757] Running
	I1228 07:32:33.808368    3796 system_pods.go:89] "kube-scheduler-custom-flannel-410600" [51b9f441-806f-47b7-a861-0d00dd0b67ae] Running
	I1228 07:32:33.808368    3796 system_pods.go:89] "storage-provisioner" [808967a2-42a8-4d60-9a2f-a281ee2f6ba3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:32:30.343818   10604 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.0872645s)
	W1228 07:32:30.343818   10604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1228 07:32:30.343818   10604 logs.go:123] Gathering logs for etcd [86effdc549fd] ...
	I1228 07:32:30.343818   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86effdc549fd"
	I1228 07:32:30.387996   10604 logs.go:123] Gathering logs for kube-scheduler [7e91a1649939] ...
	I1228 07:32:30.387996   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e91a1649939"
	I1228 07:32:30.467266   10604 logs.go:123] Gathering logs for kube-scheduler [3e6ad5f26302] ...
	I1228 07:32:30.467266   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6ad5f26302"
	I1228 07:32:30.501977   10604 logs.go:123] Gathering logs for kube-controller-manager [0c08327d6047] ...
	I1228 07:32:30.501977   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c08327d6047"
	I1228 07:32:30.537773   10604 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:30.537773   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:33.079184   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:32:33.082197   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": EOF
	I1228 07:32:33.085882   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:33.118957   10604 logs.go:282] 2 containers: [293c56278bf9 98141142a325]
	I1228 07:32:33.122363   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:33.155491   10604 logs.go:282] 1 containers: [86effdc549fd]
	I1228 07:32:33.159342   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:33.189136   10604 logs.go:282] 1 containers: [51d165bee2b3]
	I1228 07:32:33.192914   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:33.224133   10604 logs.go:282] 2 containers: [7e91a1649939 3e6ad5f26302]
	I1228 07:32:33.227914   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:33.258927   10604 logs.go:282] 1 containers: [54081c44d8cf]
	I1228 07:32:33.262576   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:33.293760   10604 logs.go:282] 2 containers: [43bc05da5b6a 0c08327d6047]
	I1228 07:32:33.297224   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:33.326812   10604 logs.go:282] 0 containers: []
	W1228 07:32:33.326812   10604 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:33.330042   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:33.370084   10604 logs.go:282] 1 containers: [8391aeb4a821]
	I1228 07:32:33.370652   10604 logs.go:123] Gathering logs for coredns [51d165bee2b3] ...
	I1228 07:32:33.370652   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d165bee2b3"
	I1228 07:32:33.406421   10604 logs.go:123] Gathering logs for kube-scheduler [7e91a1649939] ...
	I1228 07:32:33.406421   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e91a1649939"
	I1228 07:32:33.494121   10604 logs.go:123] Gathering logs for kube-controller-manager [43bc05da5b6a] ...
	I1228 07:32:33.494121   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bc05da5b6a"
	I1228 07:32:33.529113   10604 logs.go:123] Gathering logs for storage-provisioner [8391aeb4a821] ...
	I1228 07:32:33.529113   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8391aeb4a821"
	I1228 07:32:33.560832   10604 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:33.560883   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:33.677625   10604 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:33.677625   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:33.716114   10604 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:33.716114   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:33.800310   10604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:33.800310   10604 logs.go:123] Gathering logs for kube-apiserver [293c56278bf9] ...
	I1228 07:32:33.800310   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 293c56278bf9"
	I1228 07:32:33.840280   10604 logs.go:123] Gathering logs for etcd [86effdc549fd] ...
	I1228 07:32:33.840280   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86effdc549fd"
	I1228 07:32:33.884177   10604 logs.go:123] Gathering logs for kube-proxy [54081c44d8cf] ...
	I1228 07:32:33.884177   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54081c44d8cf"
	I1228 07:32:33.917960   10604 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:33.918482   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:33.978790   10604 logs.go:123] Gathering logs for container status ...
	I1228 07:32:33.978790   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:34.047751   10604 logs.go:123] Gathering logs for kube-scheduler [3e6ad5f26302] ...
	I1228 07:32:34.047751   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6ad5f26302"
	I1228 07:32:34.082758   10604 logs.go:123] Gathering logs for kube-controller-manager [0c08327d6047] ...
	I1228 07:32:34.082758   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c08327d6047"
	I1228 07:32:34.118155   10604 logs.go:123] Gathering logs for kube-apiserver [98141142a325] ...
	I1228 07:32:34.118155   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98141142a325"
	W1228 07:32:34.150559   10604 logs.go:130] failed kube-apiserver [98141142a325]: command: /bin/bash -c "docker logs --tail 400 98141142a325" /bin/bash -c "docker logs --tail 400 98141142a325": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 98141142a325
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 98141142a325
	
	** /stderr **
	I1228 07:32:34.126162    3796 system_pods.go:86] 7 kube-system pods found
	I1228 07:32:34.126162    3796 system_pods.go:89] "coredns-7d764666f9-87t9m" [ac716350-376c-48f4-a48b-1efe2066879c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:32:34.126162    3796 system_pods.go:89] "etcd-custom-flannel-410600" [4e44743f-2dac-4021-b1b9-d311eb606d18] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:32:34.126162    3796 system_pods.go:89] "kube-apiserver-custom-flannel-410600" [788cdfeb-b321-4fda-bbf7-54dd974ebc2b] Running
	I1228 07:32:34.126162    3796 system_pods.go:89] "kube-controller-manager-custom-flannel-410600" [1b8afce1-78c5-48e6-90df-39fa8be53080] Running
	I1228 07:32:34.126162    3796 system_pods.go:89] "kube-proxy-gvvhc" [340f9bd0-647e-4bd7-adf6-02870c58f757] Running
	I1228 07:32:34.126162    3796 system_pods.go:89] "kube-scheduler-custom-flannel-410600" [51b9f441-806f-47b7-a861-0d00dd0b67ae] Running
	I1228 07:32:34.126162    3796 system_pods.go:89] "storage-provisioner" [808967a2-42a8-4d60-9a2f-a281ee2f6ba3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:32:34.428994    3796 system_pods.go:86] 7 kube-system pods found
	I1228 07:32:34.428994    3796 system_pods.go:89] "coredns-7d764666f9-87t9m" [ac716350-376c-48f4-a48b-1efe2066879c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:32:34.428994    3796 system_pods.go:89] "etcd-custom-flannel-410600" [4e44743f-2dac-4021-b1b9-d311eb606d18] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:32:34.428994    3796 system_pods.go:89] "kube-apiserver-custom-flannel-410600" [788cdfeb-b321-4fda-bbf7-54dd974ebc2b] Running
	I1228 07:32:34.428994    3796 system_pods.go:89] "kube-controller-manager-custom-flannel-410600" [1b8afce1-78c5-48e6-90df-39fa8be53080] Running
	I1228 07:32:34.428994    3796 system_pods.go:89] "kube-proxy-gvvhc" [340f9bd0-647e-4bd7-adf6-02870c58f757] Running
	I1228 07:32:34.428994    3796 system_pods.go:89] "kube-scheduler-custom-flannel-410600" [51b9f441-806f-47b7-a861-0d00dd0b67ae] Running
	I1228 07:32:34.428994    3796 system_pods.go:89] "storage-provisioner" [808967a2-42a8-4d60-9a2f-a281ee2f6ba3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:32:35.028099    3796 system_pods.go:86] 7 kube-system pods found
	I1228 07:32:35.028099    3796 system_pods.go:89] "coredns-7d764666f9-87t9m" [ac716350-376c-48f4-a48b-1efe2066879c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:32:35.028099    3796 system_pods.go:89] "etcd-custom-flannel-410600" [4e44743f-2dac-4021-b1b9-d311eb606d18] Running
	I1228 07:32:35.028099    3796 system_pods.go:89] "kube-apiserver-custom-flannel-410600" [788cdfeb-b321-4fda-bbf7-54dd974ebc2b] Running
	I1228 07:32:35.028099    3796 system_pods.go:89] "kube-controller-manager-custom-flannel-410600" [1b8afce1-78c5-48e6-90df-39fa8be53080] Running
	I1228 07:32:35.028099    3796 system_pods.go:89] "kube-proxy-gvvhc" [340f9bd0-647e-4bd7-adf6-02870c58f757] Running
	I1228 07:32:35.028099    3796 system_pods.go:89] "kube-scheduler-custom-flannel-410600" [51b9f441-806f-47b7-a861-0d00dd0b67ae] Running
	I1228 07:32:35.028099    3796 system_pods.go:89] "storage-provisioner" [808967a2-42a8-4d60-9a2f-a281ee2f6ba3] Running
	I1228 07:32:35.672205    3796 system_pods.go:86] 7 kube-system pods found
	I1228 07:32:35.672205    3796 system_pods.go:89] "coredns-7d764666f9-87t9m" [ac716350-376c-48f4-a48b-1efe2066879c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:32:35.673088    3796 system_pods.go:89] "etcd-custom-flannel-410600" [4e44743f-2dac-4021-b1b9-d311eb606d18] Running
	I1228 07:32:35.673088    3796 system_pods.go:89] "kube-apiserver-custom-flannel-410600" [788cdfeb-b321-4fda-bbf7-54dd974ebc2b] Running
	I1228 07:32:35.673088    3796 system_pods.go:89] "kube-controller-manager-custom-flannel-410600" [1b8afce1-78c5-48e6-90df-39fa8be53080] Running
	I1228 07:32:35.673088    3796 system_pods.go:89] "kube-proxy-gvvhc" [340f9bd0-647e-4bd7-adf6-02870c58f757] Running
	I1228 07:32:35.673088    3796 system_pods.go:89] "kube-scheduler-custom-flannel-410600" [51b9f441-806f-47b7-a861-0d00dd0b67ae] Running
	I1228 07:32:35.673088    3796 system_pods.go:89] "storage-provisioner" [808967a2-42a8-4d60-9a2f-a281ee2f6ba3] Running
	I1228 07:32:36.617333    3796 system_pods.go:86] 7 kube-system pods found
	I1228 07:32:36.617333    3796 system_pods.go:89] "coredns-7d764666f9-87t9m" [ac716350-376c-48f4-a48b-1efe2066879c] Running
	I1228 07:32:36.617333    3796 system_pods.go:89] "etcd-custom-flannel-410600" [4e44743f-2dac-4021-b1b9-d311eb606d18] Running
	I1228 07:32:36.617333    3796 system_pods.go:89] "kube-apiserver-custom-flannel-410600" [788cdfeb-b321-4fda-bbf7-54dd974ebc2b] Running
	I1228 07:32:36.617333    3796 system_pods.go:89] "kube-controller-manager-custom-flannel-410600" [1b8afce1-78c5-48e6-90df-39fa8be53080] Running
	I1228 07:32:36.617333    3796 system_pods.go:89] "kube-proxy-gvvhc" [340f9bd0-647e-4bd7-adf6-02870c58f757] Running
	I1228 07:32:36.617333    3796 system_pods.go:89] "kube-scheduler-custom-flannel-410600" [51b9f441-806f-47b7-a861-0d00dd0b67ae] Running
	I1228 07:32:36.617333    3796 system_pods.go:89] "storage-provisioner" [808967a2-42a8-4d60-9a2f-a281ee2f6ba3] Running
	I1228 07:32:36.617333    3796 system_pods.go:126] duration metric: took 3.0981718s to wait for k8s-apps to be running ...
	I1228 07:32:36.617333    3796 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 07:32:36.621527    3796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:32:36.640768    3796 system_svc.go:56] duration metric: took 23.4347ms WaitForService to wait for kubelet
	I1228 07:32:36.640836    3796 kubeadm.go:587] duration metric: took 15.9196891s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:32:36.640836    3796 node_conditions.go:102] verifying NodePressure condition ...
	I1228 07:32:36.646223    3796 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1228 07:32:36.646267    3796 node_conditions.go:123] node cpu capacity is 16
	I1228 07:32:36.646302    3796 node_conditions.go:105] duration metric: took 5.4315ms to run NodePressure ...
	I1228 07:32:36.646334    3796 start.go:242] waiting for startup goroutines ...
	I1228 07:32:36.646363    3796 start.go:247] waiting for cluster config update ...
	I1228 07:32:36.646363    3796 start.go:256] writing updated cluster config ...
	I1228 07:32:36.651370    3796 ssh_runner.go:195] Run: rm -f paused
	I1228 07:32:36.658950    3796 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:32:36.664772    3796 pod_ready.go:83] waiting for pod "coredns-7d764666f9-87t9m" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:32:36.673166    3796 pod_ready.go:94] pod "coredns-7d764666f9-87t9m" is "Ready"
	I1228 07:32:36.673166    3796 pod_ready.go:86] duration metric: took 8.3936ms for pod "coredns-7d764666f9-87t9m" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:32:36.677799    3796 pod_ready.go:83] waiting for pod "etcd-custom-flannel-410600" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:32:36.689767    3796 pod_ready.go:94] pod "etcd-custom-flannel-410600" is "Ready"
	I1228 07:32:36.689767    3796 pod_ready.go:86] duration metric: took 11.9384ms for pod "etcd-custom-flannel-410600" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:32:36.694648    3796 pod_ready.go:83] waiting for pod "kube-apiserver-custom-flannel-410600" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:32:36.703888    3796 pod_ready.go:94] pod "kube-apiserver-custom-flannel-410600" is "Ready"
	I1228 07:32:36.703888    3796 pod_ready.go:86] duration metric: took 9.2037ms for pod "kube-apiserver-custom-flannel-410600" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:32:36.708210    3796 pod_ready.go:83] waiting for pod "kube-controller-manager-custom-flannel-410600" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:32:37.065812    3796 pod_ready.go:94] pod "kube-controller-manager-custom-flannel-410600" is "Ready"
	I1228 07:32:37.065812    3796 pod_ready.go:86] duration metric: took 357.5968ms for pod "kube-controller-manager-custom-flannel-410600" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:32:37.265676    3796 pod_ready.go:83] waiting for pod "kube-proxy-gvvhc" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:32:37.666817    3796 pod_ready.go:94] pod "kube-proxy-gvvhc" is "Ready"
	I1228 07:32:37.666817    3796 pod_ready.go:86] duration metric: took 401.135ms for pod "kube-proxy-gvvhc" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:32:37.866493    3796 pod_ready.go:83] waiting for pod "kube-scheduler-custom-flannel-410600" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:32:38.267750    3796 pod_ready.go:94] pod "kube-scheduler-custom-flannel-410600" is "Ready"
	I1228 07:32:38.267750    3796 pod_ready.go:86] duration metric: took 401.2091ms for pod "kube-scheduler-custom-flannel-410600" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:32:38.267750    3796 pod_ready.go:40] duration metric: took 1.6087751s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:32:38.363625    3796 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 07:32:38.367066    3796 out.go:179] * Done! kubectl is now configured to use "custom-flannel-410600" cluster and "default" namespace by default
	I1228 07:32:34.691862    9412 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55937/healthz ...
	I1228 07:32:34.695835    9412 api_server.go:315] stopped: https://127.0.0.1:55937/healthz: Get "https://127.0.0.1:55937/healthz": EOF
	I1228 07:32:34.699577    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:34.733063    9412 logs.go:282] 2 containers: [abfb381d03bd bfa8dc267780]
	I1228 07:32:34.736640    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:34.771333    9412 logs.go:282] 1 containers: [94cc14c728d5]
	I1228 07:32:34.775071    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:34.804573    9412 logs.go:282] 0 containers: []
	W1228 07:32:34.804653    9412 logs.go:284] No container was found matching "coredns"
	I1228 07:32:34.808531    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:34.838439    9412 logs.go:282] 2 containers: [3ddeeca293a6 47ffeb4b853d]
	I1228 07:32:34.842499    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:34.871804    9412 logs.go:282] 0 containers: []
	W1228 07:32:34.871804    9412 logs.go:284] No container was found matching "kube-proxy"
	I1228 07:32:34.876452    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:34.906586    9412 logs.go:282] 2 containers: [32d8cc1c272d 67014a6dfb79]
	I1228 07:32:34.911227    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:34.945803    9412 logs.go:282] 0 containers: []
	W1228 07:32:34.945852    9412 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:34.949662    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:34.983186    9412 logs.go:282] 0 containers: []
	W1228 07:32:34.983186    9412 logs.go:284] No container was found matching "storage-provisioner"
	I1228 07:32:34.983186    9412 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:34.983186    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:35.096950    9412 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:35.096950    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:35.135826    9412 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:35.135826    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:35.218656    9412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:35.218656    9412 logs.go:123] Gathering logs for kube-apiserver [abfb381d03bd] ...
	I1228 07:32:35.218656    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfb381d03bd"
	I1228 07:32:35.261876    9412 logs.go:123] Gathering logs for kube-scheduler [3ddeeca293a6] ...
	I1228 07:32:35.261876    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ddeeca293a6"
	I1228 07:32:35.298827    9412 logs.go:123] Gathering logs for kube-controller-manager [32d8cc1c272d] ...
	I1228 07:32:35.298870    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32d8cc1c272d"
	I1228 07:32:35.330141    9412 logs.go:123] Gathering logs for kube-controller-manager [67014a6dfb79] ...
	I1228 07:32:35.330141    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67014a6dfb79"
	I1228 07:32:35.379815    9412 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:35.379815    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:35.416394    9412 logs.go:123] Gathering logs for kube-apiserver [bfa8dc267780] ...
	I1228 07:32:35.416394    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa8dc267780"
	I1228 07:32:35.465022    9412 logs.go:123] Gathering logs for etcd [94cc14c728d5] ...
	I1228 07:32:35.465022    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94cc14c728d5"
	I1228 07:32:35.514401    9412 logs.go:123] Gathering logs for kube-scheduler [47ffeb4b853d] ...
	I1228 07:32:35.514401    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47ffeb4b853d"
	I1228 07:32:35.561888    9412 logs.go:123] Gathering logs for container status ...
	I1228 07:32:35.561888    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:38.119705    9412 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55937/healthz ...
	I1228 07:32:38.122365    9412 api_server.go:315] stopped: https://127.0.0.1:55937/healthz: Get "https://127.0.0.1:55937/healthz": EOF
	I1228 07:32:38.126473    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:38.159925    9412 logs.go:282] 2 containers: [abfb381d03bd bfa8dc267780]
	I1228 07:32:38.163659    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:38.192702    9412 logs.go:282] 1 containers: [94cc14c728d5]
	I1228 07:32:38.198947    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:38.227236    9412 logs.go:282] 0 containers: []
	W1228 07:32:38.227236    9412 logs.go:284] No container was found matching "coredns"
	I1228 07:32:38.230909    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:38.260681    9412 logs.go:282] 2 containers: [3ddeeca293a6 47ffeb4b853d]
	I1228 07:32:38.264494    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:38.298834    9412 logs.go:282] 0 containers: []
	W1228 07:32:38.298834    9412 logs.go:284] No container was found matching "kube-proxy"
	I1228 07:32:38.302774    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:38.334358    9412 logs.go:282] 2 containers: [32d8cc1c272d 67014a6dfb79]
	I1228 07:32:38.338271    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:38.367775    9412 logs.go:282] 0 containers: []
	W1228 07:32:38.367775    9412 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:38.372280    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:38.403005    9412 logs.go:282] 0 containers: []
	W1228 07:32:38.403005    9412 logs.go:284] No container was found matching "storage-provisioner"
	I1228 07:32:38.403005    9412 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:38.403005    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:38.446988    9412 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:38.446988    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:38.535694    9412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:38.535694    9412 logs.go:123] Gathering logs for etcd [94cc14c728d5] ...
	I1228 07:32:38.535779    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94cc14c728d5"
	I1228 07:32:38.587203    9412 logs.go:123] Gathering logs for kube-scheduler [3ddeeca293a6] ...
	I1228 07:32:38.587203    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ddeeca293a6"
	I1228 07:32:38.622761    9412 logs.go:123] Gathering logs for kube-controller-manager [67014a6dfb79] ...
	I1228 07:32:38.622761    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67014a6dfb79"
	I1228 07:32:38.679711    9412 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:38.679711    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:38.713714    9412 logs.go:123] Gathering logs for container status ...
	I1228 07:32:38.713714    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:38.762716    9412 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:38.762716    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:38.868241    9412 logs.go:123] Gathering logs for kube-apiserver [abfb381d03bd] ...
	I1228 07:32:38.868241    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfb381d03bd"
	I1228 07:32:38.910278    9412 logs.go:123] Gathering logs for kube-apiserver [bfa8dc267780] ...
	I1228 07:32:38.910278    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa8dc267780"
	I1228 07:32:38.960379    9412 logs.go:123] Gathering logs for kube-scheduler [47ffeb4b853d] ...
	I1228 07:32:38.960379    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47ffeb4b853d"
	I1228 07:32:39.004909    9412 logs.go:123] Gathering logs for kube-controller-manager [32d8cc1c272d] ...
	I1228 07:32:39.004909    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32d8cc1c272d"
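	Each retry cycle above first enumerates the per-component containers with "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" and only then pulls "docker logs --tail 400" for each hit. A minimal sketch of that discovery step via os/exec follows (the component list is taken from the log; this is an illustration, not minikube's code):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // listContainers mirrors the lookup seen in the log:
	    //   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
	    func listContainers(component string) ([]string, error) {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter", "name=k8s_"+component,
	            "--format", "{{.ID}}").Output()
	        if err != nil {
	            return nil, err
	        }
	        // One container ID per line; Fields also drops the trailing newline.
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
	            ids, err := listContainers(c)
	            if err != nil {
	                fmt.Printf("%s: %v\n", c, err)
	                continue
	            }
	            fmt.Printf("%d containers: %v\n", len(ids), ids)
	        }
	    }

	An empty result from this lookup is what produces the "No container was found matching ..." warnings interleaved above.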
	I1228 07:32:36.651370   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:32:36.653362   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": EOF
	I1228 07:32:36.656622   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:36.694480   10604 logs.go:282] 1 containers: [293c56278bf9]
	I1228 07:32:36.697835   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:36.729399   10604 logs.go:282] 1 containers: [86effdc549fd]
	I1228 07:32:36.732898   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:36.761306   10604 logs.go:282] 1 containers: [51d165bee2b3]
	I1228 07:32:36.765333   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:36.794200   10604 logs.go:282] 2 containers: [7e91a1649939 3e6ad5f26302]
	I1228 07:32:36.797483   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:36.828792   10604 logs.go:282] 1 containers: [54081c44d8cf]
	I1228 07:32:36.832228   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:36.861668   10604 logs.go:282] 2 containers: [43bc05da5b6a 0c08327d6047]
	I1228 07:32:36.865406   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:36.893592   10604 logs.go:282] 0 containers: []
	W1228 07:32:36.893592   10604 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:36.897093   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:36.928438   10604 logs.go:282] 1 containers: [8391aeb4a821]
	I1228 07:32:36.928438   10604 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:36.928438   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:37.039472   10604 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:37.039472   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:37.084718   10604 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:37.084718   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:37.165480   10604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:37.165523   10604 logs.go:123] Gathering logs for kube-apiserver [293c56278bf9] ...
	I1228 07:32:37.165584   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 293c56278bf9"
	I1228 07:32:37.206413   10604 logs.go:123] Gathering logs for etcd [86effdc549fd] ...
	I1228 07:32:37.206413   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86effdc549fd"
	I1228 07:32:37.249077   10604 logs.go:123] Gathering logs for coredns [51d165bee2b3] ...
	I1228 07:32:37.249077   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d165bee2b3"
	I1228 07:32:37.285676   10604 logs.go:123] Gathering logs for kube-scheduler [3e6ad5f26302] ...
	I1228 07:32:37.285676   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6ad5f26302"
	I1228 07:32:37.324026   10604 logs.go:123] Gathering logs for kube-controller-manager [0c08327d6047] ...
	I1228 07:32:37.324093   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c08327d6047"
	I1228 07:32:37.358007   10604 logs.go:123] Gathering logs for kube-scheduler [7e91a1649939] ...
	I1228 07:32:37.358007   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e91a1649939"
	I1228 07:32:37.439902   10604 logs.go:123] Gathering logs for kube-proxy [54081c44d8cf] ...
	I1228 07:32:37.439902   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54081c44d8cf"
	I1228 07:32:37.477952   10604 logs.go:123] Gathering logs for kube-controller-manager [43bc05da5b6a] ...
	I1228 07:32:37.477952   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bc05da5b6a"
	I1228 07:32:37.517394   10604 logs.go:123] Gathering logs for storage-provisioner [8391aeb4a821] ...
	I1228 07:32:37.517457   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8391aeb4a821"
	I1228 07:32:37.550744   10604 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:37.550744   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:37.607064   10604 logs.go:123] Gathering logs for container status ...
	I1228 07:32:37.607064   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
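	The probe driving these retry cycles is a plain HTTPS GET against the cluster's forwarded apiserver port; the repeated "stopped: ... EOF" typically means the connection is accepted but closed before any HTTP response arrives, i.e. the apiserver process is not yet serving. A minimal sketch of such a probe, assuming the per-cluster port 55731 from this run and skipping TLS verification because the target is 127.0.0.1 with a self-signed certificate:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    func main() {
	        // Assumed per-cluster forwarded port taken from this run.
	        url := "https://127.0.0.1:55731/healthz"
	        client := &http.Client{
	            Timeout: 2 * time.Second,
	            // The local apiserver cert is self-signed, so skip
	            // verification for this loopback health probe only.
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        for attempt := 1; attempt <= 10; attempt++ {
	            resp, err := client.Get(url)
	            if err != nil {
	                fmt.Printf("attempt %d: apiserver not healthy yet: %v\n", attempt, err)
	                time.Sleep(3 * time.Second)
	                continue
	            }
	            body, _ := io.ReadAll(resp.Body)
	            resp.Body.Close()
	            fmt.Printf("healthz: %s %s\n", resp.Status, string(body))
	            return
	        }
	    }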
	I1228 07:32:41.539814    9412 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55937/healthz ...
	I1228 07:32:41.544809    9412 api_server.go:315] stopped: https://127.0.0.1:55937/healthz: Get "https://127.0.0.1:55937/healthz": EOF
	I1228 07:32:41.549817    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:41.586809    9412 logs.go:282] 2 containers: [abfb381d03bd bfa8dc267780]
	I1228 07:32:41.589802    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:41.629825    9412 logs.go:282] 1 containers: [94cc14c728d5]
	I1228 07:32:41.634810    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:41.667801    9412 logs.go:282] 0 containers: []
	W1228 07:32:41.667801    9412 logs.go:284] No container was found matching "coredns"
	I1228 07:32:41.670799    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:41.710818    9412 logs.go:282] 2 containers: [3ddeeca293a6 47ffeb4b853d]
	I1228 07:32:41.715818    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:41.752808    9412 logs.go:282] 0 containers: []
	W1228 07:32:41.752808    9412 logs.go:284] No container was found matching "kube-proxy"
	I1228 07:32:41.757806    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:41.793808    9412 logs.go:282] 2 containers: [32d8cc1c272d 67014a6dfb79]
	I1228 07:32:41.797810    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:41.840818    9412 logs.go:282] 0 containers: []
	W1228 07:32:41.840818    9412 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:41.846814    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:41.884783    9412 logs.go:282] 0 containers: []
	W1228 07:32:41.884783    9412 logs.go:284] No container was found matching "storage-provisioner"
	I1228 07:32:41.884783    9412 logs.go:123] Gathering logs for kube-apiserver [bfa8dc267780] ...
	I1228 07:32:41.884783    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa8dc267780"
	I1228 07:32:41.949787    9412 logs.go:123] Gathering logs for etcd [94cc14c728d5] ...
	I1228 07:32:41.949787    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94cc14c728d5"
	I1228 07:32:42.007784    9412 logs.go:123] Gathering logs for kube-scheduler [3ddeeca293a6] ...
	I1228 07:32:42.007784    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ddeeca293a6"
	I1228 07:32:42.048779    9412 logs.go:123] Gathering logs for container status ...
	I1228 07:32:42.048779    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:42.111783    9412 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:42.111783    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:42.245789    9412 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:42.245789    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:42.286967    9412 logs.go:123] Gathering logs for kube-apiserver [abfb381d03bd] ...
	I1228 07:32:42.286967    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfb381d03bd"
	I1228 07:32:42.336979    9412 logs.go:123] Gathering logs for kube-scheduler [47ffeb4b853d] ...
	I1228 07:32:42.336979    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47ffeb4b853d"
	I1228 07:32:42.387965    9412 logs.go:123] Gathering logs for kube-controller-manager [32d8cc1c272d] ...
	I1228 07:32:42.387965    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32d8cc1c272d"
	I1228 07:32:42.437003    9412 logs.go:123] Gathering logs for kube-controller-manager [67014a6dfb79] ...
	I1228 07:32:42.437003    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67014a6dfb79"
	I1228 07:32:42.487971    9412 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:42.487971    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:42.533975    9412 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:42.534975    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:42.628990    9412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:40.169615   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:32:40.173373   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": EOF
	I1228 07:32:40.179009   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:40.214371   10604 logs.go:282] 1 containers: [293c56278bf9]
	I1228 07:32:40.218552   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:40.249988   10604 logs.go:282] 1 containers: [86effdc549fd]
	I1228 07:32:40.252972   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:40.281972   10604 logs.go:282] 1 containers: [51d165bee2b3]
	I1228 07:32:40.284968   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:40.316387   10604 logs.go:282] 2 containers: [7e91a1649939 3e6ad5f26302]
	I1228 07:32:40.320252   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:40.355124   10604 logs.go:282] 1 containers: [54081c44d8cf]
	I1228 07:32:40.358800   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:40.390451   10604 logs.go:282] 1 containers: [43bc05da5b6a]
	I1228 07:32:40.396855   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:40.429233   10604 logs.go:282] 0 containers: []
	W1228 07:32:40.429282   10604 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:40.432943   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:40.463154   10604 logs.go:282] 1 containers: [8391aeb4a821]
	I1228 07:32:40.463154   10604 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:40.463154   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:40.500815   10604 logs.go:123] Gathering logs for coredns [51d165bee2b3] ...
	I1228 07:32:40.500815   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d165bee2b3"
	I1228 07:32:40.543894   10604 logs.go:123] Gathering logs for kube-scheduler [7e91a1649939] ...
	I1228 07:32:40.543942   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e91a1649939"
	I1228 07:32:40.628748   10604 logs.go:123] Gathering logs for kube-scheduler [3e6ad5f26302] ...
	I1228 07:32:40.628748   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6ad5f26302"
	I1228 07:32:40.665350   10604 logs.go:123] Gathering logs for kube-proxy [54081c44d8cf] ...
	I1228 07:32:40.665350   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54081c44d8cf"
	I1228 07:32:40.707128   10604 logs.go:123] Gathering logs for kube-controller-manager [43bc05da5b6a] ...
	I1228 07:32:40.707128   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bc05da5b6a"
	I1228 07:32:40.743117   10604 logs.go:123] Gathering logs for storage-provisioner [8391aeb4a821] ...
	I1228 07:32:40.743117   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8391aeb4a821"
	I1228 07:32:40.774821   10604 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:40.774890   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:40.837554   10604 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:40.837554   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:40.961744   10604 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:40.961744   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:41.060804   10604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:41.060804   10604 logs.go:123] Gathering logs for kube-apiserver [293c56278bf9] ...
	I1228 07:32:41.060804   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 293c56278bf9"
	I1228 07:32:41.102798   10604 logs.go:123] Gathering logs for etcd [86effdc549fd] ...
	I1228 07:32:41.102798   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86effdc549fd"
	I1228 07:32:41.153803   10604 logs.go:123] Gathering logs for container status ...
	I1228 07:32:41.153803   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:43.725625   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:32:43.728343   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": EOF
	I1228 07:32:43.733215   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:43.766285   10604 logs.go:282] 1 containers: [293c56278bf9]
	I1228 07:32:43.770245   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:43.803287   10604 logs.go:282] 1 containers: [86effdc549fd]
	I1228 07:32:43.808189   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:43.837296   10604 logs.go:282] 1 containers: [51d165bee2b3]
	I1228 07:32:43.842898   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:43.874020   10604 logs.go:282] 2 containers: [7e91a1649939 3e6ad5f26302]
	I1228 07:32:43.878976   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:43.910442   10604 logs.go:282] 1 containers: [54081c44d8cf]
	I1228 07:32:43.915820   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:43.952178   10604 logs.go:282] 1 containers: [43bc05da5b6a]
	I1228 07:32:43.957306   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:43.993584   10604 logs.go:282] 0 containers: []
	W1228 07:32:43.993665   10604 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:43.999086   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:44.034418   10604 logs.go:282] 1 containers: [8391aeb4a821]
	I1228 07:32:44.034533   10604 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:44.034533   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:44.102342   10604 logs.go:123] Gathering logs for container status ...
	I1228 07:32:44.102342   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:44.172355   10604 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:44.172355   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:45.129465    9412 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55937/healthz ...
	I1228 07:32:45.131877    9412 api_server.go:315] stopped: https://127.0.0.1:55937/healthz: Get "https://127.0.0.1:55937/healthz": EOF
	I1228 07:32:45.135897    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:45.170098    9412 logs.go:282] 2 containers: [abfb381d03bd bfa8dc267780]
	I1228 07:32:45.173635    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:45.207373    9412 logs.go:282] 1 containers: [94cc14c728d5]
	I1228 07:32:45.210892    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:45.242113    9412 logs.go:282] 0 containers: []
	W1228 07:32:45.242113    9412 logs.go:284] No container was found matching "coredns"
	I1228 07:32:45.246146    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:45.279937    9412 logs.go:282] 2 containers: [3ddeeca293a6 47ffeb4b853d]
	I1228 07:32:45.283339    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:45.312491    9412 logs.go:282] 0 containers: []
	W1228 07:32:45.312491    9412 logs.go:284] No container was found matching "kube-proxy"
	I1228 07:32:45.315498    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:45.345668    9412 logs.go:282] 2 containers: [32d8cc1c272d 67014a6dfb79]
	I1228 07:32:45.349812    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:45.378991    9412 logs.go:282] 0 containers: []
	W1228 07:32:45.378991    9412 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:45.381984    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:45.412920    9412 logs.go:282] 0 containers: []
	W1228 07:32:45.412920    9412 logs.go:284] No container was found matching "storage-provisioner"
	I1228 07:32:45.412920    9412 logs.go:123] Gathering logs for kube-scheduler [3ddeeca293a6] ...
	I1228 07:32:45.412920    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ddeeca293a6"
	I1228 07:32:45.446780    9412 logs.go:123] Gathering logs for kube-controller-manager [67014a6dfb79] ...
	I1228 07:32:45.446780    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67014a6dfb79"
	I1228 07:32:45.489777    9412 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:45.489777    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:45.522768    9412 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:45.522768    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:45.626079    9412 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:45.626079    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:45.666156    9412 logs.go:123] Gathering logs for kube-apiserver [abfb381d03bd] ...
	I1228 07:32:45.666156    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfb381d03bd"
	I1228 07:32:45.709378    9412 logs.go:123] Gathering logs for kube-scheduler [47ffeb4b853d] ...
	I1228 07:32:45.709378    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47ffeb4b853d"
	I1228 07:32:45.754192    9412 logs.go:123] Gathering logs for kube-controller-manager [32d8cc1c272d] ...
	I1228 07:32:45.754192    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32d8cc1c272d"
	I1228 07:32:45.785912    9412 logs.go:123] Gathering logs for container status ...
	I1228 07:32:45.785912    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:45.841247    9412 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:45.841247    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:45.928350    9412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:45.928392    9412 logs.go:123] Gathering logs for kube-apiserver [bfa8dc267780] ...
	I1228 07:32:45.928457    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa8dc267780"
	I1228 07:32:45.977774    9412 logs.go:123] Gathering logs for etcd [94cc14c728d5] ...
	I1228 07:32:45.977774    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94cc14c728d5"
	I1228 07:32:48.518834    9412 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55937/healthz ...
	I1228 07:32:48.521798    9412 api_server.go:315] stopped: https://127.0.0.1:55937/healthz: Get "https://127.0.0.1:55937/healthz": EOF
	I1228 07:32:48.524994    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:48.556185    9412 logs.go:282] 2 containers: [abfb381d03bd bfa8dc267780]
	I1228 07:32:48.559888    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:48.586312    9412 logs.go:282] 1 containers: [94cc14c728d5]
	I1228 07:32:48.590423    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:48.619914    9412 logs.go:282] 0 containers: []
	W1228 07:32:48.619972    9412 logs.go:284] No container was found matching "coredns"
	I1228 07:32:48.623409    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:48.654971    9412 logs.go:282] 2 containers: [3ddeeca293a6 47ffeb4b853d]
	I1228 07:32:48.658298    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:48.687103    9412 logs.go:282] 0 containers: []
	W1228 07:32:48.687183    9412 logs.go:284] No container was found matching "kube-proxy"
	I1228 07:32:48.690335    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:48.721746    9412 logs.go:282] 2 containers: [32d8cc1c272d 67014a6dfb79]
	I1228 07:32:48.725091    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:48.753266    9412 logs.go:282] 0 containers: []
	W1228 07:32:48.753294    9412 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:48.757744    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:48.786791    9412 logs.go:282] 0 containers: []
	W1228 07:32:48.786826    9412 logs.go:284] No container was found matching "storage-provisioner"
	I1228 07:32:48.786857    9412 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:48.786879    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:48.888939    9412 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:48.888939    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:48.928683    9412 logs.go:123] Gathering logs for kube-apiserver [abfb381d03bd] ...
	I1228 07:32:48.928683    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfb381d03bd"
	I1228 07:32:48.965672    9412 logs.go:123] Gathering logs for kube-apiserver [bfa8dc267780] ...
	I1228 07:32:48.965672    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa8dc267780"
	I1228 07:32:49.012265    9412 logs.go:123] Gathering logs for container status ...
	I1228 07:32:49.012265    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:49.067511    9412 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:49.067558    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1228 07:32:44.287750   10604 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:44.287750   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:44.328738   10604 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:44.328738   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:44.417670   10604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:44.417737   10604 logs.go:123] Gathering logs for etcd [86effdc549fd] ...
	I1228 07:32:44.417767   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86effdc549fd"
	I1228 07:32:44.468820   10604 logs.go:123] Gathering logs for coredns [51d165bee2b3] ...
	I1228 07:32:44.468820   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d165bee2b3"
	I1228 07:32:44.508462   10604 logs.go:123] Gathering logs for kube-scheduler [3e6ad5f26302] ...
	I1228 07:32:44.508532   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6ad5f26302"
	I1228 07:32:44.543180   10604 logs.go:123] Gathering logs for kube-proxy [54081c44d8cf] ...
	I1228 07:32:44.543180   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54081c44d8cf"
	I1228 07:32:44.578957   10604 logs.go:123] Gathering logs for kube-controller-manager [43bc05da5b6a] ...
	I1228 07:32:44.579001   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bc05da5b6a"
	I1228 07:32:44.613982   10604 logs.go:123] Gathering logs for kube-apiserver [293c56278bf9] ...
	I1228 07:32:44.613982   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 293c56278bf9"
	I1228 07:32:44.653549   10604 logs.go:123] Gathering logs for kube-scheduler [7e91a1649939] ...
	I1228 07:32:44.653549   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e91a1649939"
	I1228 07:32:44.737275   10604 logs.go:123] Gathering logs for storage-provisioner [8391aeb4a821] ...
	I1228 07:32:44.737275   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8391aeb4a821"
	I1228 07:32:47.275899   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:32:47.278485   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": EOF
	I1228 07:32:47.282197   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:47.314774   10604 logs.go:282] 1 containers: [293c56278bf9]
	I1228 07:32:47.318418   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:47.351068   10604 logs.go:282] 1 containers: [86effdc549fd]
	I1228 07:32:47.354358   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:47.387687   10604 logs.go:282] 1 containers: [51d165bee2b3]
	I1228 07:32:47.391953   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:47.424273   10604 logs.go:282] 2 containers: [7e91a1649939 3e6ad5f26302]
	I1228 07:32:47.429348   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:47.463147   10604 logs.go:282] 1 containers: [54081c44d8cf]
	I1228 07:32:47.466133   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:47.497143   10604 logs.go:282] 1 containers: [43bc05da5b6a]
	I1228 07:32:47.503431   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:47.532256   10604 logs.go:282] 0 containers: []
	W1228 07:32:47.532256   10604 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:47.536001   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:47.569078   10604 logs.go:282] 1 containers: [8391aeb4a821]
	I1228 07:32:47.569158   10604 logs.go:123] Gathering logs for etcd [86effdc549fd] ...
	I1228 07:32:47.569213   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86effdc549fd"
	I1228 07:32:47.609203   10604 logs.go:123] Gathering logs for kube-scheduler [7e91a1649939] ...
	I1228 07:32:47.609203   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e91a1649939"
	I1228 07:32:47.687439   10604 logs.go:123] Gathering logs for kube-scheduler [3e6ad5f26302] ...
	I1228 07:32:47.687439   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6ad5f26302"
	I1228 07:32:47.724347   10604 logs.go:123] Gathering logs for kube-controller-manager [43bc05da5b6a] ...
	I1228 07:32:47.724347   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bc05da5b6a"
	I1228 07:32:47.761026   10604 logs.go:123] Gathering logs for storage-provisioner [8391aeb4a821] ...
	I1228 07:32:47.761091   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8391aeb4a821"
	I1228 07:32:47.795931   10604 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:47.795931   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:47.880109   10604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:47.880638   10604 logs.go:123] Gathering logs for coredns [51d165bee2b3] ...
	I1228 07:32:47.880638   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d165bee2b3"
	I1228 07:32:47.917746   10604 logs.go:123] Gathering logs for kube-proxy [54081c44d8cf] ...
	I1228 07:32:47.917746   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54081c44d8cf"
	I1228 07:32:47.950715   10604 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:47.950715   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:48.008728   10604 logs.go:123] Gathering logs for container status ...
	I1228 07:32:48.008728   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:48.073163   10604 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:48.073163   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:48.183589   10604 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:48.183589   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:48.223019   10604 logs.go:123] Gathering logs for kube-apiserver [293c56278bf9] ...
	I1228 07:32:48.223019   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 293c56278bf9"
	W1228 07:32:49.151703    9412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:49.151703    9412 logs.go:123] Gathering logs for etcd [94cc14c728d5] ...
	I1228 07:32:49.151703    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94cc14c728d5"
	I1228 07:32:49.191580    9412 logs.go:123] Gathering logs for kube-scheduler [3ddeeca293a6] ...
	I1228 07:32:49.191580    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ddeeca293a6"
	I1228 07:32:49.222953    9412 logs.go:123] Gathering logs for kube-scheduler [47ffeb4b853d] ...
	I1228 07:32:49.222953    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47ffeb4b853d"
	I1228 07:32:49.265750    9412 logs.go:123] Gathering logs for kube-controller-manager [32d8cc1c272d] ...
	I1228 07:32:49.265750    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32d8cc1c272d"
	I1228 07:32:49.297695    9412 logs.go:123] Gathering logs for kube-controller-manager [67014a6dfb79] ...
	I1228 07:32:49.297695    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67014a6dfb79"
	I1228 07:32:49.343657    9412 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:49.343657    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:51.877708    9412 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55937/healthz ...
	I1228 07:32:51.881313    9412 api_server.go:315] stopped: https://127.0.0.1:55937/healthz: Get "https://127.0.0.1:55937/healthz": EOF
	I1228 07:32:51.884843    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:51.916490    9412 logs.go:282] 2 containers: [abfb381d03bd bfa8dc267780]
	I1228 07:32:51.919718    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:51.953573    9412 logs.go:282] 1 containers: [94cc14c728d5]
	I1228 07:32:51.957848    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:51.982255    9412 logs.go:282] 0 containers: []
	W1228 07:32:51.982255    9412 logs.go:284] No container was found matching "coredns"
	I1228 07:32:51.985690    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:52.018996    9412 logs.go:282] 2 containers: [3ddeeca293a6 47ffeb4b853d]
	I1228 07:32:52.023555    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:52.052065    9412 logs.go:282] 0 containers: []
	W1228 07:32:52.052065    9412 logs.go:284] No container was found matching "kube-proxy"
	I1228 07:32:52.055730    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:52.112688    9412 logs.go:282] 2 containers: [32d8cc1c272d 67014a6dfb79]
	I1228 07:32:52.116688    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:52.142168    9412 logs.go:282] 0 containers: []
	W1228 07:32:52.142168    9412 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:52.146272    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:52.175643    9412 logs.go:282] 0 containers: []
	W1228 07:32:52.175720    9412 logs.go:284] No container was found matching "storage-provisioner"
	I1228 07:32:52.175720    9412 logs.go:123] Gathering logs for kube-scheduler [47ffeb4b853d] ...
	I1228 07:32:52.175720    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47ffeb4b853d"
	I1228 07:32:52.217643    9412 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:52.217643    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:52.298502    9412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:52.298502    9412 logs.go:123] Gathering logs for kube-apiserver [bfa8dc267780] ...
	I1228 07:32:52.298502    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa8dc267780"
	I1228 07:32:52.355067    9412 logs.go:123] Gathering logs for etcd [94cc14c728d5] ...
	I1228 07:32:52.355067    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94cc14c728d5"
	I1228 07:32:52.399866    9412 logs.go:123] Gathering logs for kube-scheduler [3ddeeca293a6] ...
	I1228 07:32:52.399866    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ddeeca293a6"
	I1228 07:32:52.432997    9412 logs.go:123] Gathering logs for kube-controller-manager [32d8cc1c272d] ...
	I1228 07:32:52.432997    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32d8cc1c272d"
	I1228 07:32:52.465740    9412 logs.go:123] Gathering logs for kube-controller-manager [67014a6dfb79] ...
	I1228 07:32:52.465740    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67014a6dfb79"
	I1228 07:32:52.513532    9412 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:52.513532    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:52.546478    9412 logs.go:123] Gathering logs for container status ...
	I1228 07:32:52.546478    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:52.602131    9412 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:52.602658    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:52.703458    9412 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:52.703458    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:52.741841    9412 logs.go:123] Gathering logs for kube-apiserver [abfb381d03bd] ...
	I1228 07:32:52.741841    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfb381d03bd"
	I1228 07:32:50.762741   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:32:50.766377   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": EOF
	I1228 07:32:50.770714   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:50.802612   10604 logs.go:282] 1 containers: [293c56278bf9]
	I1228 07:32:50.806281   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:50.834695   10604 logs.go:282] 1 containers: [86effdc549fd]
	I1228 07:32:50.838379   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:50.868754   10604 logs.go:282] 1 containers: [51d165bee2b3]
	I1228 07:32:50.872436   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:50.903920   10604 logs.go:282] 2 containers: [7e91a1649939 3e6ad5f26302]
	I1228 07:32:50.907541   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:50.937789   10604 logs.go:282] 1 containers: [54081c44d8cf]
	I1228 07:32:50.940919   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:50.970441   10604 logs.go:282] 1 containers: [43bc05da5b6a]
	I1228 07:32:50.973879   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:51.001788   10604 logs.go:282] 0 containers: []
	W1228 07:32:51.001788   10604 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:51.005090   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:51.038898   10604 logs.go:282] 1 containers: [8391aeb4a821]
	I1228 07:32:51.038898   10604 logs.go:123] Gathering logs for kube-scheduler [7e91a1649939] ...
	I1228 07:32:51.038898   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e91a1649939"
	I1228 07:32:51.124935   10604 logs.go:123] Gathering logs for kube-proxy [54081c44d8cf] ...
	I1228 07:32:51.124935   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54081c44d8cf"
	I1228 07:32:51.164019   10604 logs.go:123] Gathering logs for container status ...
	I1228 07:32:51.164081   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:51.223636   10604 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:51.223666   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:51.336140   10604 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:51.336140   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:51.376125   10604 logs.go:123] Gathering logs for kube-apiserver [293c56278bf9] ...
	I1228 07:32:51.376125   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 293c56278bf9"
	I1228 07:32:51.414516   10604 logs.go:123] Gathering logs for coredns [51d165bee2b3] ...
	I1228 07:32:51.414516   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d165bee2b3"
	I1228 07:32:51.454841   10604 logs.go:123] Gathering logs for kube-scheduler [3e6ad5f26302] ...
	I1228 07:32:51.454841   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6ad5f26302"
	I1228 07:32:51.489933   10604 logs.go:123] Gathering logs for kube-controller-manager [43bc05da5b6a] ...
	I1228 07:32:51.489933   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bc05da5b6a"
	I1228 07:32:51.524714   10604 logs.go:123] Gathering logs for storage-provisioner [8391aeb4a821] ...
	I1228 07:32:51.524714   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8391aeb4a821"
	I1228 07:32:51.559111   10604 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:51.559111   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:51.617842   10604 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:51.617842   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:51.702048   10604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:51.702048   10604 logs.go:123] Gathering logs for etcd [86effdc549fd] ...
	I1228 07:32:51.702048   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86effdc549fd"
	I1228 07:32:54.242601   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:32:54.245690   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": EOF
	I1228 07:32:54.249059   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:54.279208   10604 logs.go:282] 1 containers: [293c56278bf9]
	I1228 07:32:54.283120   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:55.287325    9412 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55937/healthz ...
	I1228 07:32:55.289320    9412 api_server.go:315] stopped: https://127.0.0.1:55937/healthz: Get "https://127.0.0.1:55937/healthz": EOF
	I1228 07:32:55.293317    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:55.325318    9412 logs.go:282] 2 containers: [abfb381d03bd bfa8dc267780]
	I1228 07:32:55.328316    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:55.365602    9412 logs.go:282] 1 containers: [94cc14c728d5]
	I1228 07:32:55.368677    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:55.400472    9412 logs.go:282] 0 containers: []
	W1228 07:32:55.400554    9412 logs.go:284] No container was found matching "coredns"
	I1228 07:32:55.404347    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:55.436923    9412 logs.go:282] 2 containers: [3ddeeca293a6 47ffeb4b853d]
	I1228 07:32:55.439923    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:55.474796    9412 logs.go:282] 0 containers: []
	W1228 07:32:55.475326    9412 logs.go:284] No container was found matching "kube-proxy"
	I1228 07:32:55.478871    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:55.510788    9412 logs.go:282] 2 containers: [32d8cc1c272d 67014a6dfb79]
	I1228 07:32:55.513796    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:55.544232    9412 logs.go:282] 0 containers: []
	W1228 07:32:55.544232    9412 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:55.548344    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:55.578135    9412 logs.go:282] 0 containers: []
	W1228 07:32:55.578135    9412 logs.go:284] No container was found matching "storage-provisioner"
	I1228 07:32:55.578135    9412 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:55.578135    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:55.656708    9412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:55.656708    9412 logs.go:123] Gathering logs for kube-apiserver [abfb381d03bd] ...
	I1228 07:32:55.656708    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfb381d03bd"
	I1228 07:32:55.696915    9412 logs.go:123] Gathering logs for kube-apiserver [bfa8dc267780] ...
	I1228 07:32:55.696985    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa8dc267780"
	I1228 07:32:55.745179    9412 logs.go:123] Gathering logs for etcd [94cc14c728d5] ...
	I1228 07:32:55.745179    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94cc14c728d5"
	I1228 07:32:55.792357    9412 logs.go:123] Gathering logs for kube-scheduler [3ddeeca293a6] ...
	I1228 07:32:55.792419    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ddeeca293a6"
	I1228 07:32:55.828104    9412 logs.go:123] Gathering logs for kube-controller-manager [67014a6dfb79] ...
	I1228 07:32:55.828104    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67014a6dfb79"
	I1228 07:32:55.872879    9412 logs.go:123] Gathering logs for kube-scheduler [47ffeb4b853d] ...
	I1228 07:32:55.873452    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47ffeb4b853d"
	I1228 07:32:55.926550    9412 logs.go:123] Gathering logs for kube-controller-manager [32d8cc1c272d] ...
	I1228 07:32:55.926550    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32d8cc1c272d"
	I1228 07:32:55.965357    9412 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:55.965357    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:55.998705    9412 logs.go:123] Gathering logs for container status ...
	I1228 07:32:55.998705    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:56.045920    9412 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:56.045971    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:56.154626    9412 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:56.154626    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:58.694439    9412 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55937/healthz ...
	I1228 07:32:58.697609    9412 api_server.go:315] stopped: https://127.0.0.1:55937/healthz: Get "https://127.0.0.1:55937/healthz": EOF
	I1228 07:32:58.701773    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:58.733072    9412 logs.go:282] 2 containers: [abfb381d03bd bfa8dc267780]
	I1228 07:32:58.737490    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:58.768690    9412 logs.go:282] 1 containers: [94cc14c728d5]
	I1228 07:32:58.772869    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:58.802140    9412 logs.go:282] 0 containers: []
	W1228 07:32:58.802140    9412 logs.go:284] No container was found matching "coredns"
	I1228 07:32:58.806340    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:58.837843    9412 logs.go:282] 2 containers: [3ddeeca293a6 47ffeb4b853d]
	I1228 07:32:58.841741    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:58.869548    9412 logs.go:282] 0 containers: []
	W1228 07:32:58.869548    9412 logs.go:284] No container was found matching "kube-proxy"
	I1228 07:32:58.873251    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:58.904129    9412 logs.go:282] 2 containers: [32d8cc1c272d 67014a6dfb79]
	I1228 07:32:58.907466    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:58.939823    9412 logs.go:282] 0 containers: []
	W1228 07:32:58.939823    9412 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:58.942827    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:58.970142    9412 logs.go:282] 0 containers: []
	W1228 07:32:58.970142    9412 logs.go:284] No container was found matching "storage-provisioner"
	I1228 07:32:58.970142    9412 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:58.970142    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:59.075887    9412 logs.go:123] Gathering logs for kube-apiserver [abfb381d03bd] ...
	I1228 07:32:59.075887    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfb381d03bd"
	I1228 07:32:54.316426   10604 logs.go:282] 1 containers: [86effdc549fd]
	I1228 07:32:54.320055   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:54.352323   10604 logs.go:282] 1 containers: [51d165bee2b3]
	I1228 07:32:54.356606   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:54.387133   10604 logs.go:282] 2 containers: [7e91a1649939 3e6ad5f26302]
	I1228 07:32:54.390659   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:54.423699   10604 logs.go:282] 1 containers: [54081c44d8cf]
	I1228 07:32:54.427082   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:54.464135   10604 logs.go:282] 1 containers: [43bc05da5b6a]
	I1228 07:32:54.470320   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:54.502306   10604 logs.go:282] 0 containers: []
	W1228 07:32:54.502306   10604 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:54.506314   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:54.536304   10604 logs.go:282] 1 containers: [8391aeb4a821]
	I1228 07:32:54.536304   10604 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:54.536304   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:54.649917   10604 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:54.649917   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:54.741544   10604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:54.741544   10604 logs.go:123] Gathering logs for kube-apiserver [293c56278bf9] ...
	I1228 07:32:54.741544   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 293c56278bf9"
	I1228 07:32:54.780505   10604 logs.go:123] Gathering logs for etcd [86effdc549fd] ...
	I1228 07:32:54.781515   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86effdc549fd"
	I1228 07:32:54.819521   10604 logs.go:123] Gathering logs for coredns [51d165bee2b3] ...
	I1228 07:32:54.819521   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d165bee2b3"
	I1228 07:32:54.854232   10604 logs.go:123] Gathering logs for storage-provisioner [8391aeb4a821] ...
	I1228 07:32:54.854232   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8391aeb4a821"
	I1228 07:32:54.894643   10604 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:54.894643   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:54.954195   10604 logs.go:123] Gathering logs for container status ...
	I1228 07:32:54.954195   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:55.038697   10604 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:55.038697   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:55.080692   10604 logs.go:123] Gathering logs for kube-scheduler [7e91a1649939] ...
	I1228 07:32:55.080692   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e91a1649939"
	I1228 07:32:55.164692   10604 logs.go:123] Gathering logs for kube-scheduler [3e6ad5f26302] ...
	I1228 07:32:55.164692   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6ad5f26302"
	I1228 07:32:55.202168   10604 logs.go:123] Gathering logs for kube-proxy [54081c44d8cf] ...
	I1228 07:32:55.202168   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54081c44d8cf"
	I1228 07:32:55.239950   10604 logs.go:123] Gathering logs for kube-controller-manager [43bc05da5b6a] ...
	I1228 07:32:55.240009   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bc05da5b6a"
	I1228 07:32:57.780606   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:32:57.784876   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": EOF
	I1228 07:32:57.789384   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:32:57.822820   10604 logs.go:282] 1 containers: [293c56278bf9]
	I1228 07:32:57.827013   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:32:57.857216   10604 logs.go:282] 1 containers: [86effdc549fd]
	I1228 07:32:57.860504   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:32:57.893722   10604 logs.go:282] 1 containers: [51d165bee2b3]
	I1228 07:32:57.896867   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:32:57.930440   10604 logs.go:282] 2 containers: [7e91a1649939 3e6ad5f26302]
	I1228 07:32:57.933765   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:32:57.963950   10604 logs.go:282] 1 containers: [54081c44d8cf]
	I1228 07:32:57.967964   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:32:57.998671   10604 logs.go:282] 1 containers: [43bc05da5b6a]
	I1228 07:32:58.002100   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:32:58.031484   10604 logs.go:282] 0 containers: []
	W1228 07:32:58.031484   10604 logs.go:284] No container was found matching "kindnet"
	I1228 07:32:58.037111   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:32:58.070232   10604 logs.go:282] 1 containers: [8391aeb4a821]
	I1228 07:32:58.070232   10604 logs.go:123] Gathering logs for kube-scheduler [7e91a1649939] ...
	I1228 07:32:58.070232   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e91a1649939"
	I1228 07:32:58.153786   10604 logs.go:123] Gathering logs for storage-provisioner [8391aeb4a821] ...
	I1228 07:32:58.153786   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8391aeb4a821"
	I1228 07:32:58.190221   10604 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:58.190276   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:58.229139   10604 logs.go:123] Gathering logs for etcd [86effdc549fd] ...
	I1228 07:32:58.229139   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86effdc549fd"
	I1228 07:32:58.275036   10604 logs.go:123] Gathering logs for coredns [51d165bee2b3] ...
	I1228 07:32:58.275036   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d165bee2b3"
	I1228 07:32:58.310335   10604 logs.go:123] Gathering logs for kube-scheduler [3e6ad5f26302] ...
	I1228 07:32:58.310430   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6ad5f26302"
	I1228 07:32:58.344970   10604 logs.go:123] Gathering logs for kube-proxy [54081c44d8cf] ...
	I1228 07:32:58.344970   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54081c44d8cf"
	I1228 07:32:58.378942   10604 logs.go:123] Gathering logs for kube-controller-manager [43bc05da5b6a] ...
	I1228 07:32:58.378942   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bc05da5b6a"
	I1228 07:32:58.413167   10604 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:58.413215   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:58.471534   10604 logs.go:123] Gathering logs for container status ...
	I1228 07:32:58.471534   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:32:58.535970   10604 logs.go:123] Gathering logs for kubelet ...
	I1228 07:32:58.535970   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:32:58.651111   10604 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:58.651111   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:58.740988   10604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:58.740988   10604 logs.go:123] Gathering logs for kube-apiserver [293c56278bf9] ...
	I1228 07:32:58.740988   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 293c56278bf9"
	I1228 07:32:59.117574    9412 logs.go:123] Gathering logs for etcd [94cc14c728d5] ...
	I1228 07:32:59.117614    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94cc14c728d5"
	I1228 07:32:59.159334    9412 logs.go:123] Gathering logs for kube-controller-manager [32d8cc1c272d] ...
	I1228 07:32:59.159334    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32d8cc1c272d"
	I1228 07:32:59.190306    9412 logs.go:123] Gathering logs for kube-controller-manager [67014a6dfb79] ...
	I1228 07:32:59.190306    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67014a6dfb79"
	I1228 07:32:59.231553    9412 logs.go:123] Gathering logs for Docker ...
	I1228 07:32:59.231553    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:32:59.263799    9412 logs.go:123] Gathering logs for dmesg ...
	I1228 07:32:59.263799    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:32:59.300858    9412 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:32:59.300858    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:32:59.386393    9412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:32:59.386393    9412 logs.go:123] Gathering logs for kube-apiserver [bfa8dc267780] ...
	I1228 07:32:59.386475    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa8dc267780"
	I1228 07:32:59.434510    9412 logs.go:123] Gathering logs for kube-scheduler [3ddeeca293a6] ...
	I1228 07:32:59.434510    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ddeeca293a6"
	I1228 07:32:59.466339    9412 logs.go:123] Gathering logs for kube-scheduler [47ffeb4b853d] ...
	I1228 07:32:59.466339    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47ffeb4b853d"
	I1228 07:32:59.509802    9412 logs.go:123] Gathering logs for container status ...
	I1228 07:32:59.509802    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:33:02.067115    9412 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55937/healthz ...
	I1228 07:33:01.282408   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:33:01.285962   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": EOF
	I1228 07:33:01.292258   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:33:01.324364   10604 logs.go:282] 1 containers: [293c56278bf9]
	I1228 07:33:01.327363   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:33:01.360370   10604 logs.go:282] 1 containers: [86effdc549fd]
	I1228 07:33:01.363372   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:33:01.407509   10604 logs.go:282] 1 containers: [51d165bee2b3]
	I1228 07:33:01.414052   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:33:01.450291   10604 logs.go:282] 2 containers: [7e91a1649939 3e6ad5f26302]
	I1228 07:33:01.455684   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:33:01.489965   10604 logs.go:282] 1 containers: [54081c44d8cf]
	I1228 07:33:01.494775   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:33:01.533453   10604 logs.go:282] 1 containers: [43bc05da5b6a]
	I1228 07:33:01.537417   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:33:01.578860   10604 logs.go:282] 0 containers: []
	W1228 07:33:01.578860   10604 logs.go:284] No container was found matching "kindnet"
	I1228 07:33:01.584858   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:33:01.620537   10604 logs.go:282] 1 containers: [8391aeb4a821]
	I1228 07:33:01.620537   10604 logs.go:123] Gathering logs for dmesg ...
	I1228 07:33:01.620537   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:33:01.667348   10604 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:33:01.667348   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:33:01.766007   10604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:33:01.766007   10604 logs.go:123] Gathering logs for etcd [86effdc549fd] ...
	I1228 07:33:01.766007   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86effdc549fd"
	I1228 07:33:01.813225   10604 logs.go:123] Gathering logs for kube-proxy [54081c44d8cf] ...
	I1228 07:33:01.813225   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54081c44d8cf"
	I1228 07:33:01.844937   10604 logs.go:123] Gathering logs for Docker ...
	I1228 07:33:01.844978   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:33:01.907673   10604 logs.go:123] Gathering logs for container status ...
	I1228 07:33:01.907673   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:33:01.974691   10604 logs.go:123] Gathering logs for kubelet ...
	I1228 07:33:01.974691   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:33:02.093701   10604 logs.go:123] Gathering logs for kube-apiserver [293c56278bf9] ...
	I1228 07:33:02.093701   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 293c56278bf9"
	I1228 07:33:02.144153   10604 logs.go:123] Gathering logs for coredns [51d165bee2b3] ...
	I1228 07:33:02.144209   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d165bee2b3"
	I1228 07:33:02.185057   10604 logs.go:123] Gathering logs for kube-scheduler [7e91a1649939] ...
	I1228 07:33:02.185131   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e91a1649939"
	I1228 07:33:02.277999   10604 logs.go:123] Gathering logs for kube-scheduler [3e6ad5f26302] ...
	I1228 07:33:02.277999   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6ad5f26302"
	I1228 07:33:02.325523   10604 logs.go:123] Gathering logs for kube-controller-manager [43bc05da5b6a] ...
	I1228 07:33:02.325604   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bc05da5b6a"
	I1228 07:33:02.363163   10604 logs.go:123] Gathering logs for storage-provisioner [8391aeb4a821] ...
	I1228 07:33:02.363163   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8391aeb4a821"
	I1228 07:33:07.067705    9412 api_server.go:315] stopped: https://127.0.0.1:55937/healthz: Get "https://127.0.0.1:55937/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1228 07:33:07.071312    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:33:07.108587    9412 logs.go:282] 3 containers: [e0969ef423f5 abfb381d03bd bfa8dc267780]
	I1228 07:33:07.112594    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:33:07.147610    9412 logs.go:282] 1 containers: [94cc14c728d5]
	I1228 07:33:07.150604    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:33:07.208598    9412 logs.go:282] 0 containers: []
	W1228 07:33:07.208598    9412 logs.go:284] No container was found matching "coredns"
	I1228 07:33:07.212605    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:33:07.244603    9412 logs.go:282] 2 containers: [3ddeeca293a6 47ffeb4b853d]
	I1228 07:33:07.247592    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:33:07.296592    9412 logs.go:282] 0 containers: []
	W1228 07:33:07.296592    9412 logs.go:284] No container was found matching "kube-proxy"
	I1228 07:33:07.300597    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:33:07.329593    9412 logs.go:282] 3 containers: [3705298ac526 32d8cc1c272d 67014a6dfb79]
	I1228 07:33:07.332605    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:33:07.359592    9412 logs.go:282] 0 containers: []
	W1228 07:33:07.359592    9412 logs.go:284] No container was found matching "kindnet"
	I1228 07:33:07.362601    9412 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:33:07.397952    9412 logs.go:282] 0 containers: []
	W1228 07:33:07.397952    9412 logs.go:284] No container was found matching "storage-provisioner"
	I1228 07:33:07.398015    9412 logs.go:123] Gathering logs for container status ...
	I1228 07:33:07.398015    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:33:07.459458    9412 logs.go:123] Gathering logs for kubelet ...
	I1228 07:33:07.459458    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:33:07.575337    9412 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:33:07.575337    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1228 07:33:04.903760   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:33:04.907315   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": EOF
	I1228 07:33:04.910798   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:33:04.943140   10604 logs.go:282] 1 containers: [293c56278bf9]
	I1228 07:33:04.947664   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:33:04.987440   10604 logs.go:282] 1 containers: [86effdc549fd]
	I1228 07:33:04.993075   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:33:05.032595   10604 logs.go:282] 1 containers: [51d165bee2b3]
	I1228 07:33:05.036587   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:33:05.066592   10604 logs.go:282] 2 containers: [7e91a1649939 3e6ad5f26302]
	I1228 07:33:05.070588   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:33:05.108601   10604 logs.go:282] 1 containers: [54081c44d8cf]
	I1228 07:33:05.113391   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:33:05.145803   10604 logs.go:282] 1 containers: [43bc05da5b6a]
	I1228 07:33:05.149413   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:33:05.183486   10604 logs.go:282] 0 containers: []
	W1228 07:33:05.183529   10604 logs.go:284] No container was found matching "kindnet"
	I1228 07:33:05.187761   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:33:05.228335   10604 logs.go:282] 1 containers: [8391aeb4a821]
	I1228 07:33:05.228335   10604 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:33:05.228335   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:33:05.328891   10604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:33:05.328929   10604 logs.go:123] Gathering logs for coredns [51d165bee2b3] ...
	I1228 07:33:05.328975   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d165bee2b3"
	I1228 07:33:05.367082   10604 logs.go:123] Gathering logs for kube-scheduler [7e91a1649939] ...
	I1228 07:33:05.367082   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e91a1649939"
	I1228 07:33:05.465938   10604 logs.go:123] Gathering logs for kube-scheduler [3e6ad5f26302] ...
	I1228 07:33:05.465938   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6ad5f26302"
	I1228 07:33:05.501207   10604 logs.go:123] Gathering logs for kube-proxy [54081c44d8cf] ...
	I1228 07:33:05.501260   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54081c44d8cf"
	I1228 07:33:05.547819   10604 logs.go:123] Gathering logs for dmesg ...
	I1228 07:33:05.547819   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:33:05.596534   10604 logs.go:123] Gathering logs for kube-apiserver [293c56278bf9] ...
	I1228 07:33:05.596573   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 293c56278bf9"
	I1228 07:33:05.650842   10604 logs.go:123] Gathering logs for etcd [86effdc549fd] ...
	I1228 07:33:05.650890   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86effdc549fd"
	I1228 07:33:05.694809   10604 logs.go:123] Gathering logs for kube-controller-manager [43bc05da5b6a] ...
	I1228 07:33:05.694809   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bc05da5b6a"
	I1228 07:33:05.740100   10604 logs.go:123] Gathering logs for storage-provisioner [8391aeb4a821] ...
	I1228 07:33:05.740139   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8391aeb4a821"
	I1228 07:33:05.795205   10604 logs.go:123] Gathering logs for Docker ...
	I1228 07:33:05.795256   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:33:05.858465   10604 logs.go:123] Gathering logs for container status ...
	I1228 07:33:05.858465   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:33:05.929975   10604 logs.go:123] Gathering logs for kubelet ...
	I1228 07:33:05.929993   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:33:08.558102   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:33:08.560642   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": EOF
	I1228 07:33:08.564060   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:33:08.599311   10604 logs.go:282] 1 containers: [293c56278bf9]
	I1228 07:33:08.603069   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:33:08.640669   10604 logs.go:282] 1 containers: [86effdc549fd]
	I1228 07:33:08.643665   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:33:08.673229   10604 logs.go:282] 1 containers: [51d165bee2b3]
	I1228 07:33:08.676231   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:33:08.710215   10604 logs.go:282] 2 containers: [7e91a1649939 3e6ad5f26302]
	I1228 07:33:08.714216   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:33:08.744785   10604 logs.go:282] 1 containers: [54081c44d8cf]
	I1228 07:33:08.748822   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:33:08.782780   10604 logs.go:282] 1 containers: [43bc05da5b6a]
	I1228 07:33:08.785784   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:33:08.813777   10604 logs.go:282] 0 containers: []
	W1228 07:33:08.813777   10604 logs.go:284] No container was found matching "kindnet"
	I1228 07:33:08.817788   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:33:08.850028   10604 logs.go:282] 1 containers: [8391aeb4a821]
	I1228 07:33:08.850993   10604 logs.go:123] Gathering logs for storage-provisioner [8391aeb4a821] ...
	I1228 07:33:08.850993   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8391aeb4a821"
	I1228 07:33:08.881994   10604 logs.go:123] Gathering logs for dmesg ...
	I1228 07:33:08.881994   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:33:08.922088   10604 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:33:08.922088   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:33:09.013854   10604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:33:09.013854   10604 logs.go:123] Gathering logs for coredns [51d165bee2b3] ...
	I1228 07:33:09.013854   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d165bee2b3"
	I1228 07:33:09.047845   10604 logs.go:123] Gathering logs for kube-scheduler [3e6ad5f26302] ...
	I1228 07:33:09.047845   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6ad5f26302"
	I1228 07:33:09.079848   10604 logs.go:123] Gathering logs for kube-proxy [54081c44d8cf] ...
	I1228 07:33:09.079848   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54081c44d8cf"
	I1228 07:33:09.115856   10604 logs.go:123] Gathering logs for Docker ...
	I1228 07:33:09.116853   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:33:09.174852   10604 logs.go:123] Gathering logs for container status ...
	I1228 07:33:09.174852   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:33:09.239665   10604 logs.go:123] Gathering logs for kubelet ...
	I1228 07:33:09.240194   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:33:09.351929   10604 logs.go:123] Gathering logs for kube-apiserver [293c56278bf9] ...
	I1228 07:33:09.351929   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 293c56278bf9"
	I1228 07:33:09.403187   10604 logs.go:123] Gathering logs for etcd [86effdc549fd] ...
	I1228 07:33:09.403366   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86effdc549fd"
	I1228 07:33:09.449437   10604 logs.go:123] Gathering logs for kube-scheduler [7e91a1649939] ...
	I1228 07:33:09.449437   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e91a1649939"
	I1228 07:33:09.536431   10604 logs.go:123] Gathering logs for kube-controller-manager [43bc05da5b6a] ...
	I1228 07:33:09.536431   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bc05da5b6a"
	I1228 07:33:12.071916   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:33:12.074461   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": EOF
	I1228 07:33:12.078596   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1228 07:33:12.116281   10604 logs.go:282] 1 containers: [293c56278bf9]
	I1228 07:33:12.120464   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1228 07:33:12.155593   10604 logs.go:282] 1 containers: [86effdc549fd]
	I1228 07:33:12.158150   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1228 07:33:12.188271   10604 logs.go:282] 1 containers: [51d165bee2b3]
	I1228 07:33:12.191266   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1228 07:33:12.229990   10604 logs.go:282] 2 containers: [7e91a1649939 3e6ad5f26302]
	I1228 07:33:12.233697   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1228 07:33:12.267184   10604 logs.go:282] 1 containers: [54081c44d8cf]
	I1228 07:33:12.270746   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1228 07:33:12.309460   10604 logs.go:282] 1 containers: [43bc05da5b6a]
	I1228 07:33:12.314447   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1228 07:33:12.352557   10604 logs.go:282] 0 containers: []
	W1228 07:33:12.352557   10604 logs.go:284] No container was found matching "kindnet"
	I1228 07:33:12.356655   10604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1228 07:33:12.390251   10604 logs.go:282] 1 containers: [8391aeb4a821]
	I1228 07:33:12.390295   10604 logs.go:123] Gathering logs for kubelet ...
	I1228 07:33:12.390351   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:33:12.511379   10604 logs.go:123] Gathering logs for dmesg ...
	I1228 07:33:12.511460   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:33:12.558772   10604 logs.go:123] Gathering logs for coredns [51d165bee2b3] ...
	I1228 07:33:12.558821   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d165bee2b3"
	I1228 07:33:12.596138   10604 logs.go:123] Gathering logs for kube-scheduler [7e91a1649939] ...
	I1228 07:33:12.596138   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e91a1649939"
	I1228 07:33:12.700948   10604 logs.go:123] Gathering logs for kube-proxy [54081c44d8cf] ...
	I1228 07:33:12.700948   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54081c44d8cf"
	I1228 07:33:12.742500   10604 logs.go:123] Gathering logs for kube-controller-manager [43bc05da5b6a] ...
	I1228 07:33:12.742500   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bc05da5b6a"
	I1228 07:33:12.783863   10604 logs.go:123] Gathering logs for Docker ...
	I1228 07:33:12.783863   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:33:12.849122   10604 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:33:12.850122   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:33:12.950535   10604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:33:12.950535   10604 logs.go:123] Gathering logs for kube-apiserver [293c56278bf9] ...
	I1228 07:33:12.950535   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 293c56278bf9"
	I1228 07:33:12.990591   10604 logs.go:123] Gathering logs for etcd [86effdc549fd] ...
	I1228 07:33:12.990591   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86effdc549fd"
	I1228 07:33:13.034851   10604 logs.go:123] Gathering logs for kube-scheduler [3e6ad5f26302] ...
	I1228 07:33:13.034851   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e6ad5f26302"
	I1228 07:33:13.079400   10604 logs.go:123] Gathering logs for storage-provisioner [8391aeb4a821] ...
	I1228 07:33:13.079400   10604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8391aeb4a821"
	I1228 07:33:13.125601   10604 logs.go:123] Gathering logs for container status ...
	I1228 07:33:13.125658   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:33:17.678221    9412 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.102726s)
	W1228 07:33:17.678221    9412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
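
Note the failure mode shifts here: after 10.1s the describe call times out during the TLS handshake rather than being refused outright, which suggests something has started listening on 8443 (plausibly the freshly created kube-apiserver container e0969ef423f5 seen at 07:33:07) but cannot complete a handshake in time. One way to tell the two modes apart from a shell, as a rough diagnostic sketch (the curl write-out variables are an assumption about how one would measure this, not taken from the test):

    # "connection refused" vs "TLS handshake timeout" in one probe:
    # a refused connection fails instantly, while a stuck handshake
    # shows time_connect succeeding and time_appconnect never finishing.
    curl -ksS -o /dev/null --max-time 10 \
         -w 'connect=%{time_connect}s tls=%{time_appconnect}s\n' \
         https://localhost:8443/healthz
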
	I1228 07:33:17.678221    9412 logs.go:123] Gathering logs for kube-apiserver [e0969ef423f5] ...
	I1228 07:33:17.678221    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0969ef423f5"
	I1228 07:33:17.720218    9412 logs.go:123] Gathering logs for kube-apiserver [abfb381d03bd] ...
	I1228 07:33:17.720218    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfb381d03bd"
	I1228 07:33:17.765220    9412 logs.go:123] Gathering logs for kube-apiserver [bfa8dc267780] ...
	I1228 07:33:17.765220    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa8dc267780"
	I1228 07:33:17.812223    9412 logs.go:123] Gathering logs for etcd [94cc14c728d5] ...
	I1228 07:33:17.812223    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94cc14c728d5"
	I1228 07:33:17.854228    9412 logs.go:123] Gathering logs for kube-controller-manager [67014a6dfb79] ...
	I1228 07:33:17.854228    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67014a6dfb79"
	I1228 07:33:17.896222    9412 logs.go:123] Gathering logs for Docker ...
	I1228 07:33:17.896222    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:33:17.929225    9412 logs.go:123] Gathering logs for dmesg ...
	I1228 07:33:17.929225    9412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:33:17.972694    9412 logs.go:123] Gathering logs for kube-scheduler [3ddeeca293a6] ...
	I1228 07:33:17.972694    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ddeeca293a6"
	I1228 07:33:18.003695    9412 logs.go:123] Gathering logs for kube-scheduler [47ffeb4b853d] ...
	I1228 07:33:18.003695    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47ffeb4b853d"
	I1228 07:33:18.045720    9412 logs.go:123] Gathering logs for kube-controller-manager [3705298ac526] ...
	I1228 07:33:18.045720    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3705298ac526"
	I1228 07:33:18.079336    9412 logs.go:123] Gathering logs for kube-controller-manager [32d8cc1c272d] ...
	I1228 07:33:18.079336    9412 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32d8cc1c272d"
	I1228 07:33:15.713563   10604 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:55731/healthz ...
	I1228 07:33:15.715769   10604 api_server.go:315] stopped: https://127.0.0.1:55731/healthz: Get "https://127.0.0.1:55731/healthz": EOF
	I1228 07:33:15.715769   10604 kubeadm.go:602] duration metric: took 4m12.2846547s to restartPrimaryControlPlane
	W1228 07:33:15.715769   10604 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1228 07:33:15.720675   10604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1228 07:33:17.751237   10604 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.0305295s)
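
Having given up on restarting the existing control plane after 4m12s, minikube wipes it. The `env PATH=...` prefix is what resolves the bare `kubeadm` name to the version-pinned binary under /var/lib/minikube/binaries/v1.32.0 rather than anything on the default root PATH. The shape of the call, isolated (paths and flags copied from the log; `sudo env` is an equivalent spelling of the logged `sudo /bin/bash -c "env ..."`):

    # Run the version-pinned kubeadm; --force skips the confirmation
    # prompt, --cri-socket pins the cri-dockerd runtime endpoint.
    sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" \
         kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
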
	I1228 07:33:17.755227   10604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:33:17.777219   10604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1228 07:33:17.791225   10604 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:33:17.795226   10604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:33:17.810222   10604 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:33:17.810222   10604 kubeadm.go:158] found existing configuration files:
	
	I1228 07:33:17.815228   10604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:33:17.829233   10604 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:33:17.833231   10604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:33:17.851228   10604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:33:17.865233   10604 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:33:17.869229   10604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:33:17.888225   10604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:33:17.901224   10604 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:33:17.905230   10604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:33:17.924223   10604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:33:17.938626   10604 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:33:17.944095   10604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
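
The grep-then-rm sequence above is minikube's stale-config check applied to each kubeconfig in turn: keep the file only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so kubeadm regenerates it. Since the reset already removed /etc/kubernetes entirely, every grep exits with status 2 and every rm is a no-op. The same logic as a loop (file list and endpoint copied from the log; the loop form is illustrative):

    #!/bin/bash
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # Remove any kubeconfig that does not reference the expected endpoint.
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" \
            || sudo rm -f "/etc/kubernetes/$f"
    done
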
	I1228 07:33:17.961690   10604 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:33:18.036693   10604 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1228 07:33:18.044698   10604 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1228 07:33:18.147383   10604 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
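
The re-init skips a long list of preflight checks: the DirAvailable-- and FileAvailable-- entries because minikube intentionally pre-stages manifests and etcd data, and Port-10250, Swap, NumCPU, Mem, and SystemVerification because the docker driver runs the node inside a container where those checks misfire; the three [WARNING] lines above are those checks demoted rather than fatal. Stripped to its shape (config path from the log; the skip list here is abbreviated for readability, the full list is in the logged command):

    # Re-initialize with the pre-staged config; checks that cannot hold
    # inside a container are demoted to warnings via the skip list.
    skips="DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem,SystemVerification"
    sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" \
         kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
         --ignore-preflight-errors="$skips"
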
	I1228 07:33:20.135976    9696 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1228 07:33:20.136053    9696 kubeadm.go:319] 
	I1228 07:33:20.136251    9696 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1228 07:33:20.140165    9696 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:33:20.140165    9696 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:33:20.140790    9696 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:33:20.141205    9696 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1228 07:33:20.141418    9696 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1228 07:33:20.141589    9696 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1228 07:33:20.141692    9696 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1228 07:33:20.141848    9696 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1228 07:33:20.141934    9696 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1228 07:33:20.142089    9696 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1228 07:33:20.142304    9696 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1228 07:33:20.142481    9696 kubeadm.go:319] CONFIG_INET: enabled
	I1228 07:33:20.142692    9696 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1228 07:33:20.142882    9696 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1228 07:33:20.143162    9696 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1228 07:33:20.143358    9696 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1228 07:33:20.143723    9696 kubeadm.go:319] OS: Linux
	I1228 07:33:20.143723    9696 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:33:20.145178    9696 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1228 07:33:20.145323    9696 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:33:20.145323    9696 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:33:20.145516    9696 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:33:20.145650    9696 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:33:20.145725    9696 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:33:20.145808    9696 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:33:20.145913    9696 kubeadm.go:319] CGROUPS_BLKIO: enabled
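
The block above is kubeadm's system verification dumping which kernel options the 5.15 WSL2 kernel has enabled; each CONFIG_* line is read from the kernel's build configuration. To spot-check one of these flags by hand (paths vary by distro; /proc/config.gz being exposed is an assumption, though the WSL2 kernel does ship it):

    # Check kernel options the way the verifier does: read the in-kernel
    # config if exposed, else the /boot copy for the running kernel.
    { zcat /proc/config.gz 2>/dev/null \
        || cat "/boot/config-$(uname -r)" 2>/dev/null; } \
        | grep -E '^CONFIG_(CGROUPS|MEMCG|OVERLAY_FS)='
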
	I1228 07:33:20.146098    9696 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:33:20.146347    9696 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:33:20.146347    9696 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:33:20.146347    9696 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:33:20.150327    9696 out.go:252]   - Generating certificates and keys ...
	I1228 07:33:20.150327    9696 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:33:20.150327    9696 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:33:20.150327    9696 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1228 07:33:20.150327    9696 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1228 07:33:20.150327    9696 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1228 07:33:20.150327    9696 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1228 07:33:20.150327    9696 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1228 07:33:20.151389    9696 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1228 07:33:20.151389    9696 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1228 07:33:20.151389    9696 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1228 07:33:20.151389    9696 kubeadm.go:319] [certs] Using the existing "sa" key
	I1228 07:33:20.151931    9696 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:33:20.152007    9696 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:33:20.152007    9696 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:33:20.152007    9696 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:33:20.152007    9696 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:33:20.152007    9696 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:33:20.152592    9696 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:33:20.152592    9696 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:33:20.155905    9696 out.go:252]   - Booting up control plane ...
	I1228 07:33:20.155905    9696 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:33:20.155905    9696 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:33:20.156471    9696 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:33:20.156471    9696 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:33:20.156471    9696 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:33:20.157067    9696 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:33:20.157067    9696 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:33:20.157067    9696 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:33:20.157067    9696 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:33:20.157067    9696 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 07:33:20.157067    9696 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001192s
	I1228 07:33:20.157067    9696 kubeadm.go:319] 
	I1228 07:33:20.157067    9696 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1228 07:33:20.157067    9696 kubeadm.go:319] 	- The kubelet is not running
	I1228 07:33:20.158037    9696 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1228 07:33:20.158037    9696 kubeadm.go:319] 
	I1228 07:33:20.158037    9696 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1228 07:33:20.158037    9696 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1228 07:33:20.158037    9696 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1228 07:33:20.158037    9696 kubeadm.go:319] 
	I1228 07:33:20.158037    9696 kubeadm.go:403] duration metric: took 8m4.1250829s to StartCluster
	I1228 07:33:20.161461    9696 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:33:20.184099    9696 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:33:20Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:33:20.188880    9696 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:33:20.207224    9696 logs.go:279] Failed to list containers for "etcd": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:33:20Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:33:20.211101    9696 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:33:20.230703    9696 logs.go:279] Failed to list containers for "coredns": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:33:20Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:33:20.236066    9696 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:33:20.254817    9696 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:33:20Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:33:20.259912    9696 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:33:20.285037    9696 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:33:20Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:33:20.290304    9696 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:33:20.311093    9696 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:33:20Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:33:20.316667    9696 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:33:20.336663    9696 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:33:20Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:33:20.336663    9696 logs.go:123] Gathering logs for kubelet ...
	I1228 07:33:20.336715    9696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:33:20.412171    9696 logs.go:123] Gathering logs for dmesg ...
	I1228 07:33:20.412171    9696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:33:20.466690    9696 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:33:20.466690    9696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:33:20.553679    9696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1228 07:33:20.542100   10286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:33:20.543002   10286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:33:20.545649   10286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:33:20.547998   10286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:33:20.548784   10286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[identical to the five "connection refused" errors and the localhost:8443 message printed above]
	
	** /stderr **
	I1228 07:33:20.553679    9696 logs.go:123] Gathering logs for Docker ...
	I1228 07:33:20.553679    9696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:33:20.595325    9696 logs.go:123] Gathering logs for container status ...
	I1228 07:33:20.595325    9696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1228 07:33:20.661334    9696 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001192s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1228 07:33:20.662333    9696 out.go:285] * 
	W1228 07:33:20.662333    9696 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr omitted: byte-for-byte identical to the kubeadm init output printed above]
	
	W1228 07:33:20.662333    9696 out.go:285] * 
	W1228 07:33:20.662333    9696 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 07:33:20.669336    9696 out.go:203] 
	W1228 07:33:20.674317    9696 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr omitted: byte-for-byte identical to the kubeadm init output printed above]
	
	W1228 07:33:20.674317    9696 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1228 07:33:20.674317    9696 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1228 07:33:20.685326    9696 out.go:203] 
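	The suggestion above maps to a concrete retry. A minimal sketch, assuming the profile named in the log sections below and the same docker driver; the --extra-config option is quoted verbatim from the suggestion, though the kubelet journal further down points at cgroup v1 itself, so the driver change alone may not clear the failure:
	
	out/minikube-windows-amd64.exe delete -p force-systemd-env-970200
	out/minikube-windows-amd64.exe start -p force-systemd-env-970200 --memory=3072 --driver=docker --extra-config=kubelet.cgroup-driver=systemd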
	
	
	==> Docker <==
	Dec 28 07:25:13 force-systemd-env-970200 dockerd[1198]: time="2025-12-28T07:25:13.225954314Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 28 07:25:13 force-systemd-env-970200 dockerd[1198]: time="2025-12-28T07:25:13.226033422Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 28 07:25:13 force-systemd-env-970200 dockerd[1198]: time="2025-12-28T07:25:13.226043023Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 28 07:25:13 force-systemd-env-970200 dockerd[1198]: time="2025-12-28T07:25:13.226048023Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 28 07:25:13 force-systemd-env-970200 dockerd[1198]: time="2025-12-28T07:25:13.226055224Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 28 07:25:13 force-systemd-env-970200 dockerd[1198]: time="2025-12-28T07:25:13.226078226Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 28 07:25:13 force-systemd-env-970200 dockerd[1198]: time="2025-12-28T07:25:13.226114729Z" level=info msg="Initializing buildkit"
	Dec 28 07:25:13 force-systemd-env-970200 dockerd[1198]: time="2025-12-28T07:25:13.336509909Z" level=info msg="Completed buildkit initialization"
	Dec 28 07:25:13 force-systemd-env-970200 dockerd[1198]: time="2025-12-28T07:25:13.344801981Z" level=info msg="Daemon has completed initialization"
	Dec 28 07:25:13 force-systemd-env-970200 dockerd[1198]: time="2025-12-28T07:25:13.344964896Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 28 07:25:13 force-systemd-env-970200 dockerd[1198]: time="2025-12-28T07:25:13.345014100Z" level=info msg="API listen on [::]:2376"
	Dec 28 07:25:13 force-systemd-env-970200 dockerd[1198]: time="2025-12-28T07:25:13.344965496Z" level=info msg="API listen on /run/docker.sock"
	Dec 28 07:25:13 force-systemd-env-970200 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 28 07:25:14 force-systemd-env-970200 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 28 07:25:14 force-systemd-env-970200 cri-dockerd[1493]: time="2025-12-28T07:25:14Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 28 07:25:14 force-systemd-env-970200 cri-dockerd[1493]: time="2025-12-28T07:25:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 28 07:25:14 force-systemd-env-970200 cri-dockerd[1493]: time="2025-12-28T07:25:14Z" level=info msg="Start docker client with request timeout 0s"
	Dec 28 07:25:14 force-systemd-env-970200 cri-dockerd[1493]: time="2025-12-28T07:25:14Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 28 07:25:14 force-systemd-env-970200 cri-dockerd[1493]: time="2025-12-28T07:25:14Z" level=info msg="Loaded network plugin cni"
	Dec 28 07:25:14 force-systemd-env-970200 cri-dockerd[1493]: time="2025-12-28T07:25:14Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 28 07:25:14 force-systemd-env-970200 cri-dockerd[1493]: time="2025-12-28T07:25:14Z" level=info msg="Setting cgroupDriver systemd"
	Dec 28 07:25:14 force-systemd-env-970200 cri-dockerd[1493]: time="2025-12-28T07:25:14Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 28 07:25:14 force-systemd-env-970200 cri-dockerd[1493]: time="2025-12-28T07:25:14Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 28 07:25:14 force-systemd-env-970200 cri-dockerd[1493]: time="2025-12-28T07:25:14Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 28 07:25:14 force-systemd-env-970200 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
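	The cri-dockerd lines above show the runtime side did adopt the systemd cgroup driver ("Setting cgroupDriver systemd"). To confirm what the engine and the node actually report, a sketch using standard `docker info` template fields and the profile name from this report (illustrative only; this profile is deleted at the end of the test):
	
	# engine cgroup driver and cgroup version, as reported by dockerd on the node
	out/minikube-windows-amd64.exe ssh -p force-systemd-env-970200 -- docker info --format "{{.CgroupDriver}} cgroup-v{{.CgroupVersion}}"
	# filesystem type of the cgroup mount: cgroup2fs means v2, tmpfs means v1
	out/minikube-windows-amd64.exe ssh -p force-systemd-env-970200 -- stat -fc %T /sys/fs/cgroup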
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1228 07:33:23.690043   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:33:23.690768   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:33:23.693154   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:33:23.694051   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:33:23.695536   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.488927] CPU: 13 PID: 298968 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f4c1ebe9b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f4c1ebe9af6.
	[  +0.000001] RSP: 002b:00007ffd4c961510 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec28 07:32] CPU: 4 PID: 299261 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f3d40a74b20
	[  +0.000006] Code: Unable to access opcode bytes at RIP 0x7f3d40a74af6.
	[  +0.000001] RSP: 002b:00007ffc74e11250 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +5.939421] tmpfs: Unknown parameter 'noswap'
	[  +7.585326] tmpfs: Unknown parameter 'noswap'
	[Dec28 07:33] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 07:33:23 up  1:58,  0 user,  load average: 3.50, 3.54, 3.03
	Linux force-systemd-env-970200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:33:20 force-systemd-env-970200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 28 07:33:21 force-systemd-env-970200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 28 07:33:21 force-systemd-env-970200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:33:21 force-systemd-env-970200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:33:21 force-systemd-env-970200 kubelet[10334]: E1228 07:33:21.526284   10334 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 28 07:33:21 force-systemd-env-970200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 28 07:33:21 force-systemd-env-970200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 28 07:33:22 force-systemd-env-970200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 28 07:33:22 force-systemd-env-970200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:33:22 force-systemd-env-970200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:33:22 force-systemd-env-970200 kubelet[10376]: E1228 07:33:22.261677   10376 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 28 07:33:22 force-systemd-env-970200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 28 07:33:22 force-systemd-env-970200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 28 07:33:22 force-systemd-env-970200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 28 07:33:22 force-systemd-env-970200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:33:22 force-systemd-env-970200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:33:23 force-systemd-env-970200 kubelet[10402]: E1228 07:33:23.038300   10402 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 28 07:33:23 force-systemd-env-970200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 28 07:33:23 force-systemd-env-970200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 28 07:33:23 force-systemd-env-970200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 28 07:33:23 force-systemd-env-970200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:33:23 force-systemd-env-970200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:33:23 force-systemd-env-970200 kubelet[10519]: E1228 07:33:23.754880   10519 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 28 07:33:23 force-systemd-env-970200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 28 07:33:23 force-systemd-env-970200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p force-systemd-env-970200 -n force-systemd-env-970200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p force-systemd-env-970200 -n force-systemd-env-970200: exit status 6 (617.1498ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1228 07:33:26.226605    1816 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-970200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-env-970200" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-env-970200" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-970200
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-970200: (3.3267464s)
--- FAIL: TestForceSystemdEnv (523.70s)
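
The kubelet journal above states the root cause outright: kubelet v1.35 refuses to run on a cgroup v1 host, and this WSL2 kernel (5.15.153.1-microsoft-standard-WSL2) still presents cgroup v1, so the service crash-loops (restart counters 321-324) and the 4m0s health check on 127.0.0.1:10248 never passes. Per the [WARNING SystemVerification] text, cgroup v1 can be kept working by setting the kubelet configuration option FailCgroupV1 to false and skipping that validation, but the durable fix is moving the WSL2 backend to cgroup v2. A sketch of that host-side change, assuming Docker Desktop's WSL2 backend on this Windows host (an assumption about the CI host, not something taken from this report): add the lines below to %UserProfile%\.wslconfig, then run `wsl --shutdown` and restart Docker Desktop.

	[wsl2]
	kernelCommandLine = cgroup_no_v1=all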

                                                
                                    
TestErrorSpam/setup (42.73s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-057300 -n=1 --memory=3072 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-057300 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-057300 -n=1 --memory=3072 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-057300 --driver=docker: (42.7327139s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube container"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-057300] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
- KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
- MINIKUBE_LOCATION=22352
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting "nospam-057300" primary control-plane node in "nospam-057300" cluster
* Pulling base image v0.0.48-1766884053-22351 ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-057300" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube container
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (42.73s)
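
TestErrorSpam/setup completed the start itself in 42.7s; it fails only because the two stderr lines above are not on the test's allow-list. The registry.k8s.io warning points at the proxy page linked in the output; a minimal sketch of following that guidance on this host (PowerShell; the proxy address is a placeholder, and 192.168.49.0/24 is assumed to be the docker driver's default subnet):

	# hypothetical proxy, exported before minikube start so the node inherits it
	$env:HTTPS_PROXY = "http://proxy.example.com:3128"
	$env:NO_PROXY    = "localhost,127.0.0.1,192.168.49.0/24"
	out/minikube-windows-amd64.exe start -p nospam-057300 -n=1 --memory=3072 --wait=false --driver=docker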

                                                
                                    

Test pass (319/349)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.13
4 TestDownloadOnly/v1.28.0/preload-exists 0.04
7 TestDownloadOnly/v1.28.0/kubectl 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.3
9 TestDownloadOnly/v1.28.0/DeleteAll 1.12
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.9
12 TestDownloadOnly/v1.35.0/json-events 5.56
13 TestDownloadOnly/v1.35.0/preload-exists 0
16 TestDownloadOnly/v1.35.0/kubectl 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.26
18 TestDownloadOnly/v1.35.0/DeleteAll 1.03
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.45
20 TestDownloadOnlyKic 1.68
21 TestBinaryMirror 2.51
22 TestOffline 128.25
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.32
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.32
27 TestAddons/Setup 284.76
29 TestAddons/serial/Volcano 52.34
31 TestAddons/serial/GCPAuth/Namespaces 0.25
32 TestAddons/serial/GCPAuth/FakeCredentials 11.11
36 TestAddons/parallel/RegistryCreds 1.41
38 TestAddons/parallel/InspektorGadget 11.39
39 TestAddons/parallel/MetricsServer 7.59
41 TestAddons/parallel/CSI 51.54
42 TestAddons/parallel/Headlamp 36.45
43 TestAddons/parallel/CloudSpanner 6.99
44 TestAddons/parallel/LocalPath 57.35
45 TestAddons/parallel/NvidiaDevicePlugin 6.85
46 TestAddons/parallel/Yakd 12.9
47 TestAddons/parallel/AmdGpuDevicePlugin 6.86
48 TestAddons/StoppedEnableDisable 12.91
49 TestCertOptions 48.3
50 TestCertExpiration 262.35
51 TestDockerFlags 48.78
59 TestErrorSpam/start 2.44
60 TestErrorSpam/status 2.1
61 TestErrorSpam/pause 2.59
62 TestErrorSpam/unpause 2.5
63 TestErrorSpam/stop 19.65
66 TestFunctional/serial/CopySyncFile 0.04
67 TestFunctional/serial/StartWithProxy 73.15
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 44.22
70 TestFunctional/serial/KubeContext 0.09
71 TestFunctional/serial/KubectlGetPods 0.26
74 TestFunctional/serial/CacheCmd/cache/add_remote 9.96
75 TestFunctional/serial/CacheCmd/cache/add_local 4.18
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.19
77 TestFunctional/serial/CacheCmd/cache/list 0.19
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.6
79 TestFunctional/serial/CacheCmd/cache/cache_reload 4.43
80 TestFunctional/serial/CacheCmd/cache/delete 0.38
81 TestFunctional/serial/MinikubeKubectlCmd 0.36
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 2.62
83 TestFunctional/serial/ExtraConfig 47.38
84 TestFunctional/serial/ComponentHealth 0.14
85 TestFunctional/serial/LogsCmd 1.85
86 TestFunctional/serial/LogsFileCmd 1.86
87 TestFunctional/serial/InvalidService 5.08
89 TestFunctional/parallel/ConfigCmd 1.22
91 TestFunctional/parallel/DryRun 1.48
92 TestFunctional/parallel/InternationalLanguage 0.65
93 TestFunctional/parallel/StatusCmd 2.02
98 TestFunctional/parallel/AddonsCmd 0.42
99 TestFunctional/parallel/PersistentVolumeClaim 28.9
101 TestFunctional/parallel/SSHCmd 1.17
102 TestFunctional/parallel/CpCmd 3.46
103 TestFunctional/parallel/MySQL 83.05
104 TestFunctional/parallel/FileSync 0.56
105 TestFunctional/parallel/CertSync 3.21
109 TestFunctional/parallel/NodeLabels 0.14
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.65
113 TestFunctional/parallel/License 1.7
114 TestFunctional/parallel/ServiceCmd/DeployApp 10.3
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.77
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 22.51
120 TestFunctional/parallel/ServiceCmd/List 0.87
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.83
122 TestFunctional/parallel/ServiceCmd/HTTPS 15.01
123 TestFunctional/parallel/Version/short 0.17
124 TestFunctional/parallel/Version/components 0.85
125 TestFunctional/parallel/DockerEnv/powershell 5.2
126 TestFunctional/parallel/UpdateContextCmd/no_changes 0.31
127 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.32
128 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.29
129 TestFunctional/parallel/ImageCommands/ImageListShort 0.46
130 TestFunctional/parallel/ImageCommands/ImageListTable 0.52
131 TestFunctional/parallel/ImageCommands/ImageListJson 0.46
132 TestFunctional/parallel/ImageCommands/ImageListYaml 0.44
133 TestFunctional/parallel/ImageCommands/ImageBuild 10.68
134 TestFunctional/parallel/ImageCommands/Setup 1.57
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.15
136 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.47
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.21
142 TestFunctional/parallel/ServiceCmd/Format 15.01
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.88
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.8
145 TestFunctional/parallel/ProfileCmd/profile_not_create 1.03
146 TestFunctional/parallel/ProfileCmd/profile_list 0.94
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.92
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.7
149 TestFunctional/parallel/ImageCommands/ImageRemove 1
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.21
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.85
152 TestFunctional/parallel/ServiceCmd/URL 15.01
153 TestFunctional/delete_echo-server_images 0.15
154 TestFunctional/delete_my-image_image 0.06
155 TestFunctional/delete_minikube_cached_images 0.06
160 TestMultiControlPlane/serial/StartCluster 216.5
161 TestMultiControlPlane/serial/DeployApp 9.43
162 TestMultiControlPlane/serial/PingHostFromPods 2.5
163 TestMultiControlPlane/serial/AddWorkerNode 53.93
164 TestMultiControlPlane/serial/NodeLabels 0.13
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.94
166 TestMultiControlPlane/serial/CopyFile 32.94
167 TestMultiControlPlane/serial/StopSecondaryNode 13.45
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 1.54
169 TestMultiControlPlane/serial/RestartSecondaryNode 49.92
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.95
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 201.55
172 TestMultiControlPlane/serial/DeleteSecondaryNode 14.19
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.53
174 TestMultiControlPlane/serial/StopCluster 37.53
175 TestMultiControlPlane/serial/RestartCluster 72.96
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.52
177 TestMultiControlPlane/serial/AddSecondaryNode 98.69
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.98
181 TestImageBuild/serial/Setup 47.4
182 TestImageBuild/serial/NormalBuild 4.44
183 TestImageBuild/serial/BuildWithBuildArg 2.18
184 TestImageBuild/serial/BuildWithDockerIgnore 1.26
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.29
190 TestJSONOutput/start/Command 85.96
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 1.13
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.88
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 12.32
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.64
215 TestKicCustomNetwork/create_custom_network 51.07
216 TestKicCustomNetwork/use_default_bridge_network 50.42
217 TestKicExistingNetwork 49.73
218 TestKicCustomSubnet 50.55
219 TestKicStaticIP 50.48
220 TestMainNoArgs 0.16
221 TestMinikubeProfile 93.52
224 TestMountStart/serial/StartWithMountFirst 13.71
225 TestMountStart/serial/VerifyMountFirst 0.55
226 TestMountStart/serial/StartWithMountSecond 13.46
227 TestMountStart/serial/VerifyMountSecond 0.54
228 TestMountStart/serial/DeleteFirst 2.45
229 TestMountStart/serial/VerifyMountPostDelete 0.53
230 TestMountStart/serial/Stop 1.87
231 TestMountStart/serial/RestartStopped 10.56
232 TestMountStart/serial/VerifyMountPostStop 0.52
235 TestMultiNode/serial/FreshStart2Nodes 125.2
236 TestMultiNode/serial/DeployApp2Nodes 6.89
237 TestMultiNode/serial/PingHostFrom2Pods 1.72
238 TestMultiNode/serial/AddNode 53.47
239 TestMultiNode/serial/MultiNodeLabels 0.13
240 TestMultiNode/serial/ProfileList 1.32
241 TestMultiNode/serial/CopyFile 18.89
242 TestMultiNode/serial/StopNode 3.79
243 TestMultiNode/serial/StartAfterStop 13.06
244 TestMultiNode/serial/RestartKeepsNodes 79.66
245 TestMultiNode/serial/DeleteNode 8.09
246 TestMultiNode/serial/StopMultiNode 23.9
247 TestMultiNode/serial/RestartMultiNode 60.05
248 TestMultiNode/serial/ValidateNameConflict 46.82
253 TestScheduledStopWindows 107.91
257 TestInsufficientStorage 28.99
258 TestRunningBinaryUpgrade 351.67
260 TestKubernetesUpgrade 394.33
261 TestMissingContainerUpgrade 130.14
263 TestStoppedBinaryUpgrade/Setup 0.81
264 TestPause/serial/Start 124.48
265 TestStoppedBinaryUpgrade/Upgrade 378.14
266 TestPause/serial/SecondStartNoReconfiguration 45.48
267 TestPause/serial/Pause 1.13
268 TestPause/serial/VerifyStatus 0.64
269 TestPause/serial/Unpause 0.86
270 TestPause/serial/PauseAgain 1.4
271 TestPause/serial/DeletePaused 4.18
273 TestNoKubernetes/serial/StartNoK8sWithVersion 0.22
274 TestNoKubernetes/serial/StartWithK8s 47.95
275 TestPause/serial/VerifyDeletedResources 1.28
276 TestNoKubernetes/serial/StartWithStopK8s 20.53
277 TestNoKubernetes/serial/Start 13.99
278 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
279 TestNoKubernetes/serial/VerifyK8sNotRunning 0.54
280 TestNoKubernetes/serial/ProfileList 3.8
281 TestNoKubernetes/serial/Stop 1.87
282 TestNoKubernetes/serial/StartNoArgs 9.64
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.53
295 TestStoppedBinaryUpgrade/MinikubeLogs 1.37
303 TestPreload/Start-NoPreload-PullImage 116.65
304 TestPreload/Restart-With-Preload-Check-User-Image 49.05
306 TestNetworkPlugins/group/auto/Start 89.31
307 TestNetworkPlugins/group/auto/KubeletFlags 0.56
308 TestNetworkPlugins/group/auto/NetCatPod 15.52
309 TestNetworkPlugins/group/auto/DNS 0.23
310 TestNetworkPlugins/group/auto/Localhost 0.19
311 TestNetworkPlugins/group/auto/HairPin 0.19
312 TestNetworkPlugins/group/custom-flannel/Start 59.43
313 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.54
314 TestNetworkPlugins/group/custom-flannel/NetCatPod 15.48
315 TestNetworkPlugins/group/custom-flannel/DNS 0.23
316 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
317 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
318 TestNetworkPlugins/group/calico/Start 121.62
319 TestNetworkPlugins/group/enable-default-cni/Start 90.12
320 TestNetworkPlugins/group/flannel/Start 91.82
321 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.67
322 TestNetworkPlugins/group/enable-default-cni/NetCatPod 15.62
323 TestNetworkPlugins/group/flannel/ControllerPod 6.01
324 TestNetworkPlugins/group/false/Start 95.14
325 TestNetworkPlugins/group/flannel/KubeletFlags 0.58
326 TestNetworkPlugins/group/flannel/NetCatPod 24.47
327 TestNetworkPlugins/group/enable-default-cni/DNS 0.27
328 TestNetworkPlugins/group/enable-default-cni/Localhost 0.22
329 TestNetworkPlugins/group/enable-default-cni/HairPin 0.24
330 TestNetworkPlugins/group/calico/ControllerPod 6.01
331 TestNetworkPlugins/group/calico/KubeletFlags 0.57
332 TestNetworkPlugins/group/calico/NetCatPod 15.53
333 TestNetworkPlugins/group/flannel/DNS 0.24
334 TestNetworkPlugins/group/flannel/Localhost 0.24
335 TestNetworkPlugins/group/flannel/HairPin 0.24
336 TestNetworkPlugins/group/calico/DNS 0.3
337 TestNetworkPlugins/group/calico/Localhost 0.23
338 TestNetworkPlugins/group/calico/HairPin 0.35
339 TestNetworkPlugins/group/bridge/Start 80.75
340 TestNetworkPlugins/group/kindnet/Start 79.79
341 TestNetworkPlugins/group/kubenet/Start 84.8
342 TestNetworkPlugins/group/false/KubeletFlags 0.61
343 TestNetworkPlugins/group/false/NetCatPod 14.69
344 TestNetworkPlugins/group/false/DNS 0.27
345 TestNetworkPlugins/group/false/Localhost 0.21
346 TestNetworkPlugins/group/false/HairPin 0.21
347 TestNetworkPlugins/group/bridge/KubeletFlags 0.69
348 TestNetworkPlugins/group/bridge/NetCatPod 14.55
349 TestNetworkPlugins/group/bridge/DNS 0.25
350 TestNetworkPlugins/group/bridge/Localhost 0.21
351 TestNetworkPlugins/group/bridge/HairPin 0.21
352 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
354 TestStartStop/group/old-k8s-version/serial/FirstStart 101.14
355 TestNetworkPlugins/group/kindnet/KubeletFlags 0.54
356 TestNetworkPlugins/group/kindnet/NetCatPod 26.48
357 TestNetworkPlugins/group/kubenet/KubeletFlags 0.66
358 TestNetworkPlugins/group/kubenet/NetCatPod 17.63
360 TestStartStop/group/embed-certs/serial/FirstStart 92.55
361 TestNetworkPlugins/group/kindnet/DNS 0.23
362 TestNetworkPlugins/group/kindnet/Localhost 0.21
363 TestNetworkPlugins/group/kindnet/HairPin 0.21
364 TestNetworkPlugins/group/kubenet/DNS 0.25
365 TestNetworkPlugins/group/kubenet/Localhost 0.22
366 TestNetworkPlugins/group/kubenet/HairPin 0.23
368 TestStartStop/group/no-preload/serial/FirstStart 100.87
370 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 89.62
371 TestStartStop/group/old-k8s-version/serial/DeployApp 12.73
372 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.8
373 TestStartStop/group/old-k8s-version/serial/Stop 12.57
374 TestStartStop/group/embed-certs/serial/DeployApp 9.67
375 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.55
376 TestStartStop/group/old-k8s-version/serial/SecondStart 54.63
377 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.84
378 TestStartStop/group/embed-certs/serial/Stop 12.33
379 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.56
380 TestStartStop/group/embed-certs/serial/SecondStart 56.96
381 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.78
382 TestStartStop/group/no-preload/serial/DeployApp 11.6
383 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.61
384 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.2
385 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.76
386 TestStartStop/group/no-preload/serial/Stop 12.26
387 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.02
388 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.3
389 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.52
390 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 57.99
391 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.54
392 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.46
393 TestStartStop/group/no-preload/serial/SecondStart 63.23
394 TestStartStop/group/old-k8s-version/serial/Pause 7.45
395 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
396 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.36
398 TestStartStop/group/newest-cni/serial/FirstStart 53.15
399 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.53
400 TestStartStop/group/embed-certs/serial/Pause 8.57
401 TestPreload/PreloadSrc/gcs 6.94
402 TestPreload/PreloadSrc/github 8.72
403 TestPreload/PreloadSrc/gcs-cached 1.79
404 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
405 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.31
406 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.02
407 TestStartStop/group/newest-cni/serial/DeployApp 0
408 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.46
409 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.53
410 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.25
411 TestStartStop/group/newest-cni/serial/Stop 12.53
412 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.28
413 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.64
414 TestStartStop/group/no-preload/serial/Pause 5.42
415 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.51
416 TestStartStop/group/newest-cni/serial/SecondStart 21.92
417 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
418 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
419 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.81
420 TestStartStop/group/newest-cni/serial/Pause 5.12

TestDownloadOnly/v1.28.0/json-events (7.13s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-146000 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-146000 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker: (7.1299004s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.13s)

TestDownloadOnly/v1.28.0/preload-exists (0.04s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1228 06:28:09.621203   13556 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1228 06:28:09.663429   13556 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.04s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
--- PASS: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-146000
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-146000: exit status 85 (292.4675ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                       ARGS                                                                        │       PROFILE        │       USER        │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-146000 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker │ download-only-146000 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 06:28 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:28:02
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:28:02.557231    6532 out.go:360] Setting OutFile to fd 692 ...
	I1228 06:28:02.600207    6532 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:28:02.600207    6532 out.go:374] Setting ErrFile to fd 696...
	I1228 06:28:02.600207    6532 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1228 06:28:02.609923    6532 root.go:314] Error reading config file at C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I1228 06:28:02.617968    6532 out.go:368] Setting JSON to true
	I1228 06:28:02.620260    6532 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3222,"bootTime":1766900060,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"22H2","kernelVersion":"10.0.19045.6691 Build 19045.6691","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1228 06:28:02.620260    6532 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1228 06:28:02.637247    6532 out.go:99] [download-only-146000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	W1228 06:28:02.637789    6532 preload.go:372] Failed to list preload files: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I1228 06:28:02.637789    6532 notify.go:221] Checking for updates...
	I1228 06:28:02.640265    6532 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1228 06:28:02.642091    6532 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1228 06:28:02.644417    6532 out.go:171] MINIKUBE_LOCATION=22352
	I1228 06:28:02.645656    6532 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1228 06:28:02.650508    6532 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1228 06:28:02.651096    6532 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:28:02.855605    6532 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1228 06:28:02.859119    6532 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:28:03.559294    6532 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-28 06:28:03.538727329 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1228 06:28:03.567293    6532 out.go:99] Using the docker driver based on user configuration
	I1228 06:28:03.567293    6532 start.go:309] selected driver: docker
	I1228 06:28:03.567293    6532 start.go:928] validating driver "docker" against <nil>
	I1228 06:28:03.574171    6532 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:28:03.818091    6532 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-28 06:28:03.799868313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1228 06:28:03.818091    6532 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 06:28:03.820461    6532 start_flags.go:417] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1228 06:28:03.821120    6532 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 06:28:03.825096    6532 out.go:171] Using Docker Desktop driver with root privileges
	I1228 06:28:03.827382    6532 cni.go:84] Creating CNI manager for ""
	I1228 06:28:03.827382    6532 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1228 06:28:03.827382    6532 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1228 06:28:03.827382    6532 start.go:353] cluster config:
	{Name:download-only-146000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:28:03.830952    6532 out.go:99] Starting "download-only-146000" primary control-plane node in "download-only-146000" cluster
	I1228 06:28:03.830952    6532 cache.go:134] Beginning downloading kic base image for docker with docker
	I1228 06:28:03.832939    6532 out.go:99] Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:28:03.832939    6532 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1228 06:28:03.832939    6532 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:28:03.879946    6532 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1228 06:28:03.879946    6532 cache.go:65] Caching tarball of preloaded images
	I1228 06:28:03.880957    6532 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1228 06:28:03.884940    6532 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 to local cache
	I1228 06:28:03.884940    6532 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1766884053-22351@sha256_2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1.tar
	I1228 06:28:03.884940    6532 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1228 06:28:03.884940    6532 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1228 06:28:03.884940    6532 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1228 06:28:03.884940    6532 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1766884053-22351@sha256_2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1.tar
	I1228 06:28:03.884940    6532 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local cache directory
	I1228 06:28:03.885948    6532 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 to local cache
	I1228 06:28:03.955936    6532 preload.go:313] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1228 06:28:03.957326    6532 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-146000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-146000"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.30s)

TestDownloadOnly/v1.28.0/DeleteAll (1.12s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1200068s)
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (1.12s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.9s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-146000
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.90s)

TestDownloadOnly/v1.35.0/json-events (5.56s)

=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-093200 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-093200 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=docker: (5.5624429s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (5.56s)

TestDownloadOnly/v1.35.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I1228 06:28:17.540254   13556 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1228 06:28:17.540254   13556 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0/kubectl
--- PASS: TestDownloadOnly/v1.35.0/kubectl (0.00s)

TestDownloadOnly/v1.35.0/LogsDuration (0.26s)

=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-093200
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-093200: exit status 85 (251.6677ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │       PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-146000 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker │ download-only-146000 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 06:28 UTC │                     │
	│ delete  │ --all                                                                                                                                             │ minikube             │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 06:28 UTC │ 28 Dec 25 06:28 UTC │
	│ delete  │ -p download-only-146000                                                                                                                           │ download-only-146000 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 06:28 UTC │ 28 Dec 25 06:28 UTC │
	│ start   │ -o=json --download-only -p download-only-093200 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=docker │ download-only-093200 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 06:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:28:12
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:28:12.049610    2464 out.go:360] Setting OutFile to fd 812 ...
	I1228 06:28:12.094807    2464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:28:12.094807    2464 out.go:374] Setting ErrFile to fd 816...
	I1228 06:28:12.094807    2464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:28:12.109771    2464 out.go:368] Setting JSON to true
	I1228 06:28:12.112257    2464 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3231,"bootTime":1766900060,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"22H2","kernelVersion":"10.0.19045.6691 Build 19045.6691","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1228 06:28:12.112257    2464 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1228 06:28:12.116277    2464 out.go:99] [download-only-093200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	I1228 06:28:12.116277    2464 notify.go:221] Checking for updates...
	I1228 06:28:12.118310    2464 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1228 06:28:12.121456    2464 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1228 06:28:12.124044    2464 out.go:171] MINIKUBE_LOCATION=22352
	I1228 06:28:12.125214    2464 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1228 06:28:12.129760    2464 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1228 06:28:12.130787    2464 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:28:12.256139    2464 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1228 06:28:12.260179    2464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:28:12.504127    2464 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-28 06:28:12.484246029 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1228 06:28:12.521074    2464 out.go:99] Using the docker driver based on user configuration
	I1228 06:28:12.521074    2464 start.go:309] selected driver: docker
	I1228 06:28:12.521074    2464 start.go:928] validating driver "docker" against <nil>
	I1228 06:28:12.527728    2464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:28:12.763675    2464 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-28 06:28:12.744764644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1228 06:28:12.763954    2464 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 06:28:12.764632    2464 start_flags.go:417] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1228 06:28:12.765405    2464 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 06:28:13.097508    2464 out.go:171] Using Docker Desktop driver with root privileges
	I1228 06:28:13.110173    2464 cni.go:84] Creating CNI manager for ""
	I1228 06:28:13.110548    2464 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1228 06:28:13.110607    2464 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1228 06:28:13.110607    2464 start.go:353] cluster config:
	{Name:download-only-093200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:download-only-093200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:28:13.118863    2464 out.go:99] Starting "download-only-093200" primary control-plane node in "download-only-093200" cluster
	I1228 06:28:13.118863    2464 cache.go:134] Beginning downloading kic base image for docker with docker
	I1228 06:28:13.145627    2464 out.go:99] Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:28:13.145627    2464 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:28:13.145993    2464 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1228 06:28:13.182175    2464 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1228 06:28:13.182249    2464 cache.go:65] Caching tarball of preloaded images
	I1228 06:28:13.182435    2464 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1228 06:28:13.192549    2464 out.go:99] Downloading Kubernetes v1.35.0 preload ...
	I1228 06:28:13.192549    2464 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1228 06:28:13.192549    2464 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1228 06:28:13.202052    2464 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 to local cache
	I1228 06:28:13.202923    2464 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1766884053-22351@sha256_2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1.tar
	I1228 06:28:13.203177    2464 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1766884053-22351@sha256_2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1.tar
	I1228 06:28:13.203251    2464 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local cache directory
	I1228 06:28:13.203496    2464 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local cache directory, skipping pull
	I1228 06:28:13.203542    2464 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in cache, skipping pull
	I1228 06:28:13.203631    2464 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 as a tarball
	I1228 06:28:13.254782    2464 preload.go:313] Got checksum from GCS API "c0024de4eb9cf719bc0d5996878f94c1"
	I1228 06:28:13.254919    2464 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4?checksum=md5:c0024de4eb9cf719bc0d5996878f94c1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
	I1228 06:28:16.015110    2464 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1228 06:28:16.015695    2464 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-093200\config.json ...
	I1228 06:28:16.015847    2464 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-093200\config.json: {Name:mk5641ad7c5a5c54a9e9c5e4783e9351e69ad2be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:28:16.016673    2464 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1228 06:28:16.017527    2464 download.go:114] Downloading: https://dl.k8s.io/release/v1.35.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v1.35.0/kubectl.exe
	
	
	* The control-plane node download-only-093200 host does not exist
	  To start a cluster, run: "minikube start -p download-only-093200"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.26s)

TestDownloadOnly/v1.35.0/DeleteAll (1.03s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.0290299s)
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (1.03s)

TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.45s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-093200
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.45s)

TestDownloadOnlyKic (1.68s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-500300 --alsologtostderr --driver=docker
aaa_download_only_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-500300 --alsologtostderr --driver=docker: (1.1552502s)
helpers_test.go:176: Cleaning up "download-docker-500300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-500300
--- PASS: TestDownloadOnlyKic (1.68s)

TestBinaryMirror (2.51s)

=== RUN   TestBinaryMirror
I1228 06:28:22.155308   13556 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/windows/amd64/kubectl.exe.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-857600 --alsologtostderr --binary-mirror http://127.0.0.1:51553 --driver=docker
aaa_download_only_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-857600 --alsologtostderr --binary-mirror http://127.0.0.1:51553 --driver=docker: (1.7446331s)
helpers_test.go:176: Cleaning up "binary-mirror-857600" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-857600
--- PASS: TestBinaryMirror (2.51s)

TestOffline (128.25s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-441300 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-441300 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker: (2m4.3931353s)
helpers_test.go:176: Cleaning up "offline-docker-441300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-441300
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-441300: (3.8579935s)
--- PASS: TestOffline (128.25s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.32s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-045400
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-045400: exit status 85 (323.8412ms)
-- stdout --
	* Profile "addons-045400" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-045400"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.32s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.32s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-045400
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-045400: exit status 85 (321.4374ms)

-- stdout --
	* Profile "addons-045400" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-045400"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.32s)

TestAddons/Setup (284.76s)

=== RUN   TestAddons/Setup
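
Note: Setup boots the addons-045400 profile with every addon under test enabled at start time via repeated --addons flags (full command below). As a smaller sketch of the same mechanism, addons can also be toggled one at a time on an already-running profile; `minikube` here stands in for the tree-built out/minikube-windows-amd64.exe:

	# enable/disable a single addon on an existing profile
	minikube -p addons-045400 addons enable metrics-server
	minikube -p addons-045400 addons disable metrics-server
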
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-045400 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-045400 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (4m44.7610073s)
--- PASS: TestAddons/Setup (284.76s)

TestAddons/serial/Volcano (52.34s)

=== RUN   TestAddons/serial/Volcano
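
Note: this subtest waits for the three Volcano components to stabilize, then submits a sample vcjob and waits for its pod to run. A condensed repro of the same steps, built only from the commands logged below (the addons-045400 profile and testdata\vcjob.yaml fixture belong to this run; `minikube` stands in for out/minikube-windows-amd64.exe):

	kubectl --context addons-045400 delete -n volcano-system job volcano-admission-init
	kubectl --context addons-045400 create -f testdata\vcjob.yaml
	kubectl --context addons-045400 get vcjob -n my-volcano
	minikube -p addons-045400 addons disable volcano
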
addons_test.go:886: volcano-controller stabilized in 22.3733ms
addons_test.go:870: volcano-scheduler stabilized in 22.3733ms
addons_test.go:878: volcano-admission stabilized in 22.3733ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-7764798495-74vtl" [019e1ecf-c7d2-4264-898c-506c72021c83] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.0071391s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-5986c947c8-9krpq" [c90f7ade-a2f2-460f-937c-f360271f6919] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.0068195s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-d9cfd74d6-th8jc" [870718f3-8dbc-428c-b175-c1e4b9f9c1fc] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.0064534s
addons_test.go:905: (dbg) Run:  kubectl --context addons-045400 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-045400 create -f testdata\vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-045400 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [12e38753-f87d-44e4-b4ad-ff67d9d3fb17] Pending
helpers_test.go:353: "test-job-nginx-0" [12e38753-f87d-44e4-b4ad-ff67d9d3fb17] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [12e38753-f87d-44e4-b4ad-ff67d9d3fb17] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 22.0060107s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-045400 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-045400 addons disable volcano --alsologtostderr -v=1: (12.4761426s)
--- PASS: TestAddons/serial/Volcano (52.34s)

TestAddons/serial/GCPAuth/Namespaces (0.25s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-045400 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-045400 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.25s)

TestAddons/serial/GCPAuth/FakeCredentials (11.11s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
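
Note: this subtest checks that the gcp-auth webhook injects fake credentials into newly created pods. A condensed repro from the commands logged below (testdata\busybox.yaml is the test's own fixture):

	kubectl --context addons-045400 create -f testdata\busybox.yaml
	kubectl --context addons-045400 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
	kubectl --context addons-045400 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
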
addons_test.go:677: (dbg) Run:  kubectl --context addons-045400 create -f testdata\busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-045400 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [d6183d2a-4998-4db9-a93c-0ede6113d9d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [d6183d2a-4998-4db9-a93c-0ede6113d9d9] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.005728s
addons_test.go:696: (dbg) Run:  kubectl --context addons-045400 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-045400 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-045400 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-045400 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.11s)

TestAddons/parallel/RegistryCreds (1.41s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 8.1325ms
addons_test.go:327: (dbg) Run:  out/minikube-windows-amd64.exe addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-045400
addons_test.go:334: (dbg) Run:  kubectl --context addons-045400 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-045400 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (1.41s)

TestAddons/parallel/InspektorGadget (11.39s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-pthmq" [5f4e3440-bac5-4f0d-a71f-bf25d3a62a9d] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0065055s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-045400 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-045400 addons disable inspektor-gadget --alsologtostderr -v=1: (6.3778114s)
--- PASS: TestAddons/parallel/InspektorGadget (11.39s)

TestAddons/parallel/MetricsServer (7.59s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 9.6714ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-cd2fz" [f9a478a1-9b7b-4a1c-9ef4-f2b9cb4f694f] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0040663s
addons_test.go:465: (dbg) Run:  kubectl --context addons-045400 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-045400 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-045400 addons disable metrics-server --alsologtostderr -v=1: (1.4381127s)
--- PASS: TestAddons/parallel/MetricsServer (7.59s)

TestAddons/parallel/CSI (51.54s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
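
Note: this test walks the full CSI lifecycle: PVC, pod, VolumeSnapshot, restored PVC, restored pod. A condensed repro using only the fixtures the test itself applies (all under testdata\csi-hostpath-driver\ in this run):

	kubectl --context addons-045400 create -f testdata\csi-hostpath-driver\pvc.yaml
	kubectl --context addons-045400 create -f testdata\csi-hostpath-driver\pv-pod.yaml
	kubectl --context addons-045400 create -f testdata\csi-hostpath-driver\snapshot.yaml
	kubectl --context addons-045400 delete pod task-pv-pod
	kubectl --context addons-045400 delete pvc hpvc
	kubectl --context addons-045400 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
	kubectl --context addons-045400 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
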
I1228 06:34:56.442884   13556 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1228 06:34:56.451253   13556 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1228 06:34:56.451322   13556 kapi.go:107] duration metric: took 8.452ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 8.493ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-045400 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-045400 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [ec2754a0-0e07-4aa3-9e63-62c61cf98632] Pending
helpers_test.go:353: "task-pv-pod" [ec2754a0-0e07-4aa3-9e63-62c61cf98632] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [ec2754a0-0e07-4aa3-9e63-62c61cf98632] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.0056467s
addons_test.go:574: (dbg) Run:  kubectl --context addons-045400 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-045400 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-045400 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-045400 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-045400 delete pod task-pv-pod: (1.7444449s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-045400 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-045400 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-045400 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [0aaa6895-3600-4b96-912b-c9850c19d128] Pending
helpers_test.go:353: "task-pv-pod-restore" [0aaa6895-3600-4b96-912b-c9850c19d128] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [0aaa6895-3600-4b96-912b-c9850c19d128] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0050399s
addons_test.go:616: (dbg) Run:  kubectl --context addons-045400 delete pod task-pv-pod-restore
addons_test.go:616: (dbg) Done: kubectl --context addons-045400 delete pod task-pv-pod-restore: (1.4461012s)
addons_test.go:620: (dbg) Run:  kubectl --context addons-045400 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-045400 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-045400 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-045400 addons disable volumesnapshots --alsologtostderr -v=1: (1.2413082s)
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-045400 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-045400 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.6438813s)
--- PASS: TestAddons/parallel/CSI (51.54s)

TestAddons/parallel/Headlamp (36.45s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-045400 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-045400 --alsologtostderr -v=1: (1.7903611s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-6d8d595f-zfqhg" [b1dafc2c-eb31-4cea-a4eb-1a5623bbbc12] Pending
helpers_test.go:353: "headlamp-6d8d595f-zfqhg" [b1dafc2c-eb31-4cea-a4eb-1a5623bbbc12] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-6d8d595f-zfqhg" [b1dafc2c-eb31-4cea-a4eb-1a5623bbbc12] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-6d8d595f-zfqhg" [b1dafc2c-eb31-4cea-a4eb-1a5623bbbc12] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 28.0051667s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-045400 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-045400 addons disable headlamp --alsologtostderr -v=1: (6.6544409s)
--- PASS: TestAddons/parallel/Headlamp (36.45s)

TestAddons/parallel/CloudSpanner (6.99s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-6fndt" [856cbd00-fba8-4c87-8b6e-4d5292c225e2] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0052915s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-045400 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.99s)

TestAddons/parallel/LocalPath (57.35s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
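
Note: this test provisions a PVC through the local-path provisioner and then verifies the written file from inside the node. Condensed from the logged commands; the pvc-... directory name under /opt/local-path-provisioner is generated per run, so the placeholder below is hypothetical:

	kubectl --context addons-045400 apply -f testdata\storage-provisioner-rancher\pvc.yaml
	kubectl --context addons-045400 apply -f testdata\storage-provisioner-rancher\pod.yaml
	# <generated-pvc-dir> is the run-specific directory, e.g. pvc-<uid>_default_test-pvc
	minikube -p addons-045400 ssh "cat /opt/local-path-provisioner/<generated-pvc-dir>/file1"
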
addons_test.go:951: (dbg) Run:  kubectl --context addons-045400 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-045400 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-045400 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [9acaba27-0377-4998-b7b0-5d2b9fd7c15d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [9acaba27-0377-4998-b7b0-5d2b9fd7c15d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [9acaba27-0377-4998-b7b0-5d2b9fd7c15d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.0055159s
addons_test.go:969: (dbg) Run:  kubectl --context addons-045400 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-045400 ssh "cat /opt/local-path-provisioner/pvc-679233f3-b86d-4794-858e-4440ac18aeda_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-045400 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-045400 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-045400 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-045400 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.4893781s)
--- PASS: TestAddons/parallel/LocalPath (57.35s)

TestAddons/parallel/NvidiaDevicePlugin (6.85s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-rl9dc" [230b531b-50a0-40ee-9c4f-bdee32993f47] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0088208s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-045400 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.85s)

TestAddons/parallel/Yakd (12.9s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-j97mc" [3aaf14c5-558a-4695-a4bc-2c4a140d95b8] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0413852s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-045400 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-045400 addons disable yakd --alsologtostderr -v=1: (6.8521958s)
--- PASS: TestAddons/parallel/Yakd (12.90s)

TestAddons/parallel/AmdGpuDevicePlugin (6.86s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-ppvx6" [d1aa3158-115f-4c30-839c-43f94298fbcc] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.1639417s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-045400 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-045400 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: (1.6978221s)
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.86s)

TestAddons/StoppedEnableDisable (12.91s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-045400
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-045400: (12.0634929s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-045400
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-045400
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-045400
--- PASS: TestAddons/StoppedEnableDisable (12.91s)

TestCertOptions (48.3s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
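
Note: this test starts a cluster with extra apiserver SANs and a non-default port, then inspects the certificate actually served. Condensed from the logged commands (`minikube` stands in for out/minikube-windows-amd64.exe):

	minikube start -p cert-options-284100 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker
	# the extra IPs and names should appear under X509v3 Subject Alternative Name
	minikube -p cert-options-284100 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
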
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-284100 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-284100 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (43.3814567s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-284100 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
I1228 07:24:32.419522   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-284100
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-284100 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-284100" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-284100
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-284100: (3.7267872s)
--- PASS: TestCertOptions (48.30s)

TestCertExpiration (262.35s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
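
Note: this test issues cluster certificates with a 3-minute lifetime, lets them lapse, then restarts the same profile with a longer --cert-expiration so minikube re-issues them. Condensed from the logged commands:

	minikube start -p cert-expiration-709700 --memory=3072 --cert-expiration=3m --driver=docker
	# ...wait for the 3m certificates to expire...
	minikube start -p cert-expiration-709700 --memory=3072 --cert-expiration=8760h --driver=docker
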
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-709700 --memory=3072 --cert-expiration=3m --driver=docker
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-709700 --memory=3072 --cert-expiration=3m --driver=docker: (45.5658918s)
E1228 07:23:10.180860   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-709700 --memory=3072 --cert-expiration=8760h --driver=docker
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-709700 --memory=3072 --cert-expiration=8760h --driver=docker: (33.1007715s)
helpers_test.go:176: Cleaning up "cert-expiration-709700" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-709700
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-709700: (3.6864672s)
--- PASS: TestCertExpiration (262.35s)

TestDockerFlags (48.78s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
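
Note: this test passes environment variables and daemon options through to dockerd inside the node, then reads them back from the systemd unit. Condensed from the logged commands:

	minikube start -p docker-flags-697200 --memory=3072 --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --driver=docker
	minikube -p docker-flags-697200 ssh "sudo systemctl show docker --property=Environment --no-pager"
	minikube -p docker-flags-697200 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
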
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-697200 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-697200 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (43.5801543s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-697200 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-697200 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-697200" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-697200
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-697200: (4.0217466s)
--- PASS: TestDockerFlags (48.78s)

TestErrorSpam/start (2.44s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-057300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-057300 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-057300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-057300 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-057300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-057300 start --dry-run
--- PASS: TestErrorSpam/start (2.44s)

TestErrorSpam/status (2.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-057300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-057300 status
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-057300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-057300 status
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-057300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-057300 status
--- PASS: TestErrorSpam/status (2.10s)

TestErrorSpam/pause (2.59s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-057300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-057300 pause
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-057300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-057300 pause: (1.1522276s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-057300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-057300 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-057300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-057300 pause
--- PASS: TestErrorSpam/pause (2.59s)

TestErrorSpam/unpause (2.5s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-057300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-057300 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-057300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-057300 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-057300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-057300 unpause
--- PASS: TestErrorSpam/unpause (2.50s)

TestErrorSpam/stop (19.65s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-057300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-057300 stop
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-057300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-057300 stop: (12.0038104s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-057300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-057300 stop
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-057300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-057300 stop: (4.344532s)
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-057300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-057300 stop
error_spam_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-057300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-057300 stop: (3.3037203s)
--- PASS: TestErrorSpam/stop (19.65s)

TestFunctional/serial/CopySyncFile (0.04s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\13556\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

TestFunctional/serial/StartWithProxy (73.15s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-561400 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker
E1228 06:38:10.144599   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:38:10.150054   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:38:10.161027   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:38:10.181866   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:38:10.222498   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:38:10.303625   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:38:10.464403   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:38:10.784861   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:38:11.425996   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:38:12.707193   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:38:15.267625   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:38:20.388857   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:38:30.629962   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2244: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-561400 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker: (1m13.1487516s)
--- PASS: TestFunctional/serial/StartWithProxy (73.15s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (44.22s)

=== RUN   TestFunctional/serial/SoftStart
I1228 06:38:32.855018   13556 config.go:182] Loaded profile config "functional-561400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-561400 --alsologtostderr -v=8
E1228 06:38:51.111737   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-561400 --alsologtostderr -v=8: (44.2157631s)
functional_test.go:678: soft start took 44.2168984s for "functional-561400" cluster.
I1228 06:39:17.072416   13556 config.go:182] Loaded profile config "functional-561400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (44.22s)

TestFunctional/serial/KubeContext (0.09s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.09s)

TestFunctional/serial/KubectlGetPods (0.26s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-561400 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.26s)

TestFunctional/serial/CacheCmd/cache/add_remote (9.96s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Done: out/minikube-windows-amd64.exe -p functional-561400 cache add registry.k8s.io/pause:3.1: (3.8109974s)
functional_test.go:1069: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Done: out/minikube-windows-amd64.exe -p functional-561400 cache add registry.k8s.io/pause:3.3: (3.0451825s)
functional_test.go:1069: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 cache add registry.k8s.io/pause:latest
functional_test.go:1069: (dbg) Done: out/minikube-windows-amd64.exe -p functional-561400 cache add registry.k8s.io/pause:latest: (3.0989365s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (9.96s)

TestFunctional/serial/CacheCmd/cache/add_local (4.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
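
Note: this subtest builds a throwaway local image and loads it into the node's runtime through the cache. Condensed from the logged commands; <context-dir> is a placeholder for the temporary build context the test generates:

	docker build -t minikube-local-cache-test:functional-561400 <context-dir>
	minikube -p functional-561400 cache add minikube-local-cache-test:functional-561400
	minikube -p functional-561400 cache delete minikube-local-cache-test:functional-561400
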
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-561400 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1488840659\001
functional_test.go:1097: (dbg) Done: docker build -t minikube-local-cache-test:functional-561400 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1488840659\001: (1.3283071s)
functional_test.go:1109: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 cache add minikube-local-cache-test:functional-561400
functional_test.go:1109: (dbg) Done: out/minikube-windows-amd64.exe -p functional-561400 cache add minikube-local-cache-test:functional-561400: (2.5834492s)
functional_test.go:1114: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 cache delete minikube-local-cache-test:functional-561400
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-561400
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (4.18s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.19s)

TestFunctional/serial/CacheCmd/cache/list (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.19s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 ssh sudo crictl images
E1228 06:39:32.072414   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.60s)

TestFunctional/serial/CacheCmd/cache/cache_reload (4.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
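
Note: this subtest removes a cached image from inside the node, confirms crictl no longer finds it, then restores it with cache reload. Condensed from the logged commands:

	minikube -p functional-561400 ssh sudo docker rmi registry.k8s.io/pause:latest
	minikube -p functional-561400 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail
	minikube -p functional-561400 cache reload
	minikube -p functional-561400 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again
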
functional_test.go:1167: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-561400 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (567.7104ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 cache reload
functional_test.go:1178: (dbg) Done: out/minikube-windows-amd64.exe -p functional-561400 cache reload: (2.7148462s)
functional_test.go:1183: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (4.43s)

TestFunctional/serial/CacheCmd/cache/delete (0.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.38s)

TestFunctional/serial/MinikubeKubectlCmd (0.36s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 kubectl -- --context functional-561400 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.36s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (2.62s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out\kubectl.exe --context functional-561400 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (2.62s)

TestFunctional/serial/ExtraConfig (47.38s)

=== RUN   TestFunctional/serial/ExtraConfig
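
Note: this test restarts the running profile with a component flag passed through --extra-config; the key format is <component>.<flag-name>. Condensed from the logged command:

	# forward an admission-plugins flag to the apiserver on restart
	minikube start -p functional-561400 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
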
functional_test.go:772: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-561400 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-561400 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.3771974s)
functional_test.go:776: restart took 47.377331s for "functional-561400" cluster.
I1228 06:40:27.710378   13556 config.go:182] Loaded profile config "functional-561400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (47.38s)

TestFunctional/serial/ComponentHealth (0.14s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-561400 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.14s)

TestFunctional/serial/LogsCmd (1.85s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 logs
functional_test.go:1256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-561400 logs: (1.8501501s)
--- PASS: TestFunctional/serial/LogsCmd (1.85s)

TestFunctional/serial/LogsFileCmd (1.86s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd240943944\001\logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-windows-amd64.exe -p functional-561400 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd240943944\001\logs.txt: (1.8412387s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.86s)

TestFunctional/serial/InvalidService (5.08s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-561400 apply -f testdata\invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-561400
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-561400: exit status 115 (1.0449179s)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31180 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_service_9c977cb937a5c6299cc91c983e64e702e081bf76_1.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-561400 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (5.08s)

TestFunctional/parallel/ConfigCmd (1.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-561400 config get cpus: exit status 14 (187.873ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-561400 config get cpus: exit status 14 (160.37ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.22s)

TestFunctional/parallel/DryRun (1.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-561400 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:994: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-561400 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (593.8311ms)

-- stdout --
	* [functional-561400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1228 06:41:11.871463   10856 out.go:360] Setting OutFile to fd 544 ...
	I1228 06:41:11.918325   10856 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:41:11.918325   10856 out.go:374] Setting ErrFile to fd 1988...
	I1228 06:41:11.918325   10856 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:41:11.934378   10856 out.go:368] Setting JSON to false
	I1228 06:41:11.938250   10856 start.go:133] hostinfo: {"hostname":"minikube4","uptime":4011,"bootTime":1766900060,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"22H2","kernelVersion":"10.0.19045.6691 Build 19045.6691","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1228 06:41:11.938898   10856 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1228 06:41:11.958205   10856 out.go:179] * [functional-561400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	I1228 06:41:11.960977   10856 notify.go:221] Checking for updates...
	I1228 06:41:11.962843   10856 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1228 06:41:11.966108   10856 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:41:11.968422   10856 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1228 06:41:11.970278   10856 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:41:11.972332   10856 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:41:11.975497   10856 config.go:182] Loaded profile config "functional-561400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 06:41:11.976845   10856 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:41:12.094098   10856 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1228 06:41:12.097110   10856 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:41:12.346702   10856 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:88 SystemTime:2025-12-28 06:41:12.323028439 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1228 06:41:12.350274   10856 out.go:179] * Using the docker driver based on existing profile
	I1228 06:41:12.352390   10856 start.go:309] selected driver: docker
	I1228 06:41:12.352390   10856 start.go:928] validating driver "docker" against &{Name:functional-561400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-561400 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:41:12.352532   10856 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:41:12.355094   10856 out.go:203] 
	W1228 06:41:12.356718   10856 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1228 06:41:12.358441   10856 out.go:203] 

** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-561400 --dry-run --alsologtostderr -v=1 --driver=docker
--- PASS: TestFunctional/parallel/DryRun (1.48s)

TestFunctional/parallel/InternationalLanguage (0.65s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-561400 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-561400 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (646.322ms)

-- stdout --
	* [functional-561400] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 22H2
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1228 06:41:13.374222    8220 out.go:360] Setting OutFile to fd 1504 ...
	I1228 06:41:13.422167    8220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:41:13.422206    8220 out.go:374] Setting ErrFile to fd 1932...
	I1228 06:41:13.422245    8220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:41:13.438240    8220 out.go:368] Setting JSON to false
	I1228 06:41:13.442746    8220 start.go:133] hostinfo: {"hostname":"minikube4","uptime":4013,"bootTime":1766900060,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"22H2","kernelVersion":"10.0.19045.6691 Build 19045.6691","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1228 06:41:13.442746    8220 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1228 06:41:13.446235    8220 out.go:179] * [functional-561400] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 22H2
	I1228 06:41:13.450803    8220 notify.go:221] Checking for updates...
	I1228 06:41:13.450803    8220 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1228 06:41:13.452973    8220 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:41:13.454928    8220 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1228 06:41:13.457785    8220 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:41:13.460468    8220 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:41:13.463021    8220 config.go:182] Loaded profile config "functional-561400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 06:41:13.464037    8220 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:41:13.612017    8220 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1228 06:41:13.617598    8220 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:41:13.870257    8220 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-28 06:41:13.848932685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1228 06:41:13.873831    8220 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1228 06:41:13.877994    8220 start.go:309] selected driver: docker
	I1228 06:41:13.878035    8220 start.go:928] validating driver "docker" against &{Name:functional-561400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-561400 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:41:13.878217    8220 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:41:13.881020    8220 out.go:203] 
	W1228 06:41:13.883060    8220 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1228 06:41:13.887500    8220 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.65s)

TestFunctional/parallel/StatusCmd (2.02s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 status
functional_test.go:875: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (2.02s)

TestFunctional/parallel/AddonsCmd (0.42s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (28.90s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [a7a5bbe5-2f4c-47c8-b958-5bc825b65bf4] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0057432s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-561400 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-561400 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-561400 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-561400 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [a1f44006-1418-4794-91e9-9cf0876d24b3] Pending
helpers_test.go:353: "sp-pod" [a1f44006-1418-4794-91e9-9cf0876d24b3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [a1f44006-1418-4794-91e9-9cf0876d24b3] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.0058107s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-561400 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-561400 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-561400 delete -f testdata/storage-provisioner/pod.yaml: (1.4050087s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-561400 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [a90091c2-35c1-4651-9dcb-4027e534b3dd] Pending
helpers_test.go:353: "sp-pod" [a90091c2-35c1-4651-9dcb-4027e534b3dd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [a90091c2-35c1-4651-9dcb-4027e534b3dd] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.0061415s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-561400 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.90s)

TestFunctional/parallel/SSHCmd (1.17s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.17s)

TestFunctional/parallel/CpCmd (3.46s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 ssh -n functional-561400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 cp functional-561400:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd2568942378\001\cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 ssh -n functional-561400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 ssh -n functional-561400 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (3.46s)

TestFunctional/parallel/MySQL (83.05s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1803: (dbg) Run:  kubectl --context functional-561400 replace --force -f testdata\mysql.yaml
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-5snxs" [1e7b4860-897f-42d7-a6ec-81408997a4d5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-5snxs" [1e7b4860-897f-42d7-a6ec-81408997a4d5] Running
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m6.0159725s
functional_test.go:1817: (dbg) Run:  kubectl --context functional-561400 exec mysql-7d7b65bc95-5snxs -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-561400 exec mysql-7d7b65bc95-5snxs -- mysql -ppassword -e "show databases;": exit status 1 (206.8353ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1228 06:42:08.621953   13556 retry.go:84] will retry after 1.3s: exit status 1
functional_test.go:1817: (dbg) Run:  kubectl --context functional-561400 exec mysql-7d7b65bc95-5snxs -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-561400 exec mysql-7d7b65bc95-5snxs -- mysql -ppassword -e "show databases;": exit status 1 (195.3146ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-561400 exec mysql-7d7b65bc95-5snxs -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-561400 exec mysql-7d7b65bc95-5snxs -- mysql -ppassword -e "show databases;": exit status 1 (230.5499ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-561400 exec mysql-7d7b65bc95-5snxs -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-561400 exec mysql-7d7b65bc95-5snxs -- mysql -ppassword -e "show databases;": exit status 1 (218.2461ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-561400 exec mysql-7d7b65bc95-5snxs -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-561400 exec mysql-7d7b65bc95-5snxs -- mysql -ppassword -e "show databases;": exit status 1 (221.2458ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1228 06:42:18.575658   13556 retry.go:84] will retry after 6.3s: exit status 1
functional_test.go:1817: (dbg) Run:  kubectl --context functional-561400 exec mysql-7d7b65bc95-5snxs -- mysql -ppassword -e "show databases;"
E1228 06:43:10.148798   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:43:37.836247   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (83.05s)

TestFunctional/parallel/FileSync (0.56s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/13556/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 ssh "sudo cat /etc/test/nested/copy/13556/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.56s)

TestFunctional/parallel/CertSync (3.21s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/13556.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 ssh "sudo cat /etc/ssl/certs/13556.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/13556.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 ssh "sudo cat /usr/share/ca-certificates/13556.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/135562.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 ssh "sudo cat /etc/ssl/certs/135562.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/135562.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 ssh "sudo cat /usr/share/ca-certificates/135562.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
E1228 06:40:53.993651   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/CertSync (3.21s)

TestFunctional/parallel/NodeLabels (0.14s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-561400 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.14s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 ssh "sudo systemctl is-active crio"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-561400 ssh "sudo systemctl is-active crio": exit status 1 (653.242ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)

TestFunctional/parallel/License (1.70s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2298: (dbg) Done: out/minikube-windows-amd64.exe license: (1.6791258s)
--- PASS: TestFunctional/parallel/License (1.70s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.30s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-561400 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-561400 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-xwfj5" [c1f15aac-93f4-4d2b-a88e-45e0abf706e4] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-xwfj5" [c1f15aac-93f4-4d2b-a88e-45e0abf706e4] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.0044805s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.30s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.77s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-561400 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-561400 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-561400 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 10960: OpenProcess: The parameter is incorrect.
helpers_test.go:526: unable to kill pid 9132: OpenProcess: The parameter is incorrect.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-561400 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.77s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-561400 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (22.51s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-561400 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [aefdf82a-de40-409b-9f54-d1380e340d60] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [aefdf82a-de40-409b-9f54-d1380e340d60] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 22.0083209s
I1228 06:41:01.102924   13556 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (22.51s)

TestFunctional/parallel/ServiceCmd/List (0.87s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.87s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.83s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 service list -o json
functional_test.go:1509: Took "831.5646ms" to run "out/minikube-windows-amd64.exe -p functional-561400 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.83s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 service --namespace=default --https --url hello-node
functional_test.go:1524: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-561400 service --namespace=default --https --url hello-node: exit status 1 (15.0104036s)

-- stdout --
	https://127.0.0.1:52636

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1537: found endpoint: https://127.0.0.1:52636
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

TestFunctional/parallel/Version/short (0.17s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 version --short
--- PASS: TestFunctional/parallel/Version/short (0.17s)

TestFunctional/parallel/Version/components (0.85s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.85s)

TestFunctional/parallel/DockerEnv/powershell (5.2s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:514: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-561400 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-561400"
functional_test.go:514: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-561400 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-561400": (3.0083904s)
functional_test.go:537: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-561400 docker-env | Invoke-Expression ; docker images"
functional_test.go:537: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-561400 docker-env | Invoke-Expression ; docker images": (2.1856088s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (5.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.32s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.32s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-561400 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-561400
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-561400
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-561400 image ls --format short --alsologtostderr:
I1228 06:41:17.137742    6124 out.go:360] Setting OutFile to fd 1496 ...
I1228 06:41:17.181145    6124 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:41:17.181145    6124 out.go:374] Setting ErrFile to fd 1756...
I1228 06:41:17.181145    6124 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:41:17.198541    6124 config.go:182] Loaded profile config "functional-561400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 06:41:17.198877    6124 config.go:182] Loaded profile config "functional-561400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 06:41:17.205176    6124 cli_runner.go:164] Run: docker container inspect functional-561400 --format={{.State.Status}}
I1228 06:41:17.281509    6124 ssh_runner.go:195] Run: systemctl --version
I1228 06:41:17.283836    6124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-561400
I1228 06:41:17.336591    6124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52405 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-561400\id_rsa Username:docker}
I1228 06:41:17.456854    6124 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.46s)
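For reference, the same listing is available in four formats; short prints one repo:tag per line (as above), while the table, json and yaml variants are exercised by the following tests. A quick sketch against the same profile:
	out/minikube-windows-amd64.exe -p functional-561400 image ls --format short
	out/minikube-windows-amd64.exe -p functional-561400 image ls --format table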

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-561400 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                       IMAGE                       │        TAG        │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/etcd                              │ 3.6.6-0           │ 0a108f7189562 │ 62.5MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-561400 │ 9056ab77afb8e │ 4.94MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest            │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/pause                             │ 3.3               │ 0184c1613d929 │ 683kB  │
│ docker.io/library/minikube-local-cache-test       │ functional-561400 │ 3d3922a44a24f │ 30B    │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0           │ 2c9a4b058bd7e │ 75.8MB │
│ registry.k8s.io/kube-proxy                        │ v1.35.0           │ 32652ff1bbe6b │ 70.7MB │
│ registry.k8s.io/pause                             │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                │ 6e38f40d628db │ 31.5MB │
│ public.ecr.aws/nginx/nginx                        │ alpine            │ 04da2b0513cd7 │ 53.7MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0           │ 5c6acd67e9cd1 │ 89.8MB │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0           │ 550794e3b12ac │ 51.7MB │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1           │ aa5e3ebc0dfed │ 78.1MB │
│ registry.k8s.io/pause                             │ latest            │ 350b164e7ae1d │ 240kB  │
│ localhost/my-image                                │ functional-561400 │ 41e3d94e2e611 │ 1.24MB │
│ registry.k8s.io/pause                             │ 3.1               │ da86e6ba6ca19 │ 742kB  │
└───────────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-561400 image ls --format table --alsologtostderr:
I1228 06:41:29.176977    1844 out.go:360] Setting OutFile to fd 2040 ...
I1228 06:41:29.223059    1844 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:41:29.223059    1844 out.go:374] Setting ErrFile to fd 2028...
I1228 06:41:29.223059    1844 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:41:29.238071    1844 config.go:182] Loaded profile config "functional-561400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 06:41:29.239033    1844 config.go:182] Loaded profile config "functional-561400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 06:41:29.249172    1844 cli_runner.go:164] Run: docker container inspect functional-561400 --format={{.State.Status}}
I1228 06:41:29.310852    1844 ssh_runner.go:195] Run: systemctl --version
I1228 06:41:29.314843    1844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-561400
I1228 06:41:29.369297    1844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52405 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-561400\id_rsa Username:docker}
I1228 06:41:29.556892    1844 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-561400 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"41e3d94e2e611c1d74a32a4329ae0d53b2a6fd69041916ee293b781e4e3d19f4","repoDigests":[],"repoTags":["localhost/my-image:functional-561400"],"size":"1240000"},{"id":"5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"89800000"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"62500000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"3d3922a44a24f7b6f9ff147938553c647c79d9210216c0078775ef088660af2e","repoDigests":[]
,"repoTags":["docker.io/library/minikube-local-cache-test:functional-561400"],"size":"30"},{"id":"04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5","repoDigests":[],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"53700000"},{"id":"32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"70700000"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"78100000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-561400","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4940000"},{"id":"550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"51700000"},{"id":"6e38f40d628db3002f5617342c8872c9
35de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"75800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-561400 image ls --format json --alsologtostderr:
I1228 06:41:28.714496    6532 out.go:360] Setting OutFile to fd 1992 ...
I1228 06:41:28.758645    6532 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:41:28.758645    6532 out.go:374] Setting ErrFile to fd 1408...
I1228 06:41:28.758645    6532 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:41:28.771781    6532 config.go:182] Loaded profile config "functional-561400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 06:41:28.771828    6532 config.go:182] Loaded profile config "functional-561400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 06:41:28.778161    6532 cli_runner.go:164] Run: docker container inspect functional-561400 --format={{.State.Status}}
I1228 06:41:28.842106    6532 ssh_runner.go:195] Run: systemctl --version
I1228 06:41:28.845934    6532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-561400
I1228 06:41:28.902715    6532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52405 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-561400\id_rsa Username:docker}
I1228 06:41:29.043925    6532 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.46s)
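The json format emits a single JSON array, which makes the output easy to post-process. One way to consume it from PowerShell (a sketch, not part of the test run; ConvertFrom-Json ships with PowerShell):
	$imgs = out/minikube-windows-amd64.exe -p functional-561400 image ls --format json | ConvertFrom-Json
	$imgs | Where-Object { $_.repoTags -match 'pause' } | Select-Object id, size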

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-561400 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-561400
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 3d3922a44a24f7b6f9ff147938553c647c79d9210216c0078775ef088660af2e
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-561400
size: "30"
- id: 04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5
repoDigests: []
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "53700000"
- id: 550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "51700000"
- id: 2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "75800000"
- id: 32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "70700000"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "62500000"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "78100000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "89800000"

functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-561400 image ls --format yaml --alsologtostderr:
I1228 06:41:17.591468    8260 out.go:360] Setting OutFile to fd 1616 ...
I1228 06:41:17.633742    8260 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:41:17.633742    8260 out.go:374] Setting ErrFile to fd 1816...
I1228 06:41:17.633742    8260 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:41:17.645044    8260 config.go:182] Loaded profile config "functional-561400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 06:41:17.645611    8260 config.go:182] Loaded profile config "functional-561400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 06:41:17.651690    8260 cli_runner.go:164] Run: docker container inspect functional-561400 --format={{.State.Status}}
I1228 06:41:17.719628    8260 ssh_runner.go:195] Run: systemctl --version
I1228 06:41:17.723063    8260 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-561400
I1228 06:41:17.776244    8260 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52405 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-561400\id_rsa Username:docker}
I1228 06:41:17.897469    8260 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (10.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-561400 ssh pgrep buildkitd: exit status 1 (565.8296ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 image build -t localhost/my-image:functional-561400 testdata\build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-windows-amd64.exe -p functional-561400 image build -t localhost/my-image:functional-561400 testdata\build --alsologtostderr: (9.6934027s)
functional_test.go:338: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-561400 image build -t localhost/my-image:functional-561400 testdata\build --alsologtostderr:
I1228 06:41:18.610633    7196 out.go:360] Setting OutFile to fd 1868 ...
I1228 06:41:18.702725    7196 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:41:18.702725    7196 out.go:374] Setting ErrFile to fd 1400...
I1228 06:41:18.702725    7196 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:41:18.720712    7196 config.go:182] Loaded profile config "functional-561400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 06:41:18.750708    7196 config.go:182] Loaded profile config "functional-561400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 06:41:18.760717    7196 cli_runner.go:164] Run: docker container inspect functional-561400 --format={{.State.Status}}
I1228 06:41:18.837707    7196 ssh_runner.go:195] Run: systemctl --version
I1228 06:41:18.840705    7196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-561400
I1228 06:41:18.997160    7196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52405 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-561400\id_rsa Username:docker}
I1228 06:41:19.162525    7196 build_images.go:162] Building image from path: C:\Users\jenkins.minikube4\AppData\Local\Temp\build.1704977889.tar
I1228 06:41:19.168515    7196 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1228 06:41:19.192517    7196 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1704977889.tar
I1228 06:41:19.201538    7196 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1704977889.tar: stat -c "%s %y" /var/lib/minikube/build/build.1704977889.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1704977889.tar': No such file or directory
I1228 06:41:19.201538    7196 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\AppData\Local\Temp\build.1704977889.tar --> /var/lib/minikube/build/build.1704977889.tar (3072 bytes)
I1228 06:41:19.260514    7196 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1704977889
I1228 06:41:19.283517    7196 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1704977889 -xf /var/lib/minikube/build/build.1704977889.tar
I1228 06:41:19.354852    7196 docker.go:364] Building image: /var/lib/minikube/build/build.1704977889
I1228 06:41:19.360867    7196 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-561400 /var/lib/minikube/build/build.1704977889
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 DONE 0.0s

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 1.9s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 2.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 2.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 3.0s

#6 [2/3] RUN true
#6 DONE 2.4s

#7 [3/3] ADD content.txt /
#7 DONE 1.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:41e3d94e2e611c1d74a32a4329ae0d53b2a6fd69041916ee293b781e4e3d19f4
#8 writing image sha256:41e3d94e2e611c1d74a32a4329ae0d53b2a6fd69041916ee293b781e4e3d19f4 done
#8 naming to localhost/my-image:functional-561400 0.0s done
#8 DONE 0.2s
I1228 06:41:28.156063    7196 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-561400 /var/lib/minikube/build/build.1704977889: (8.7950915s)
I1228 06:41:28.160685    7196 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1704977889
I1228 06:41:28.180596    7196 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1704977889.tar
I1228 06:41:28.195761    7196 build_images.go:218] Built localhost/my-image:functional-561400 from C:\Users\jenkins.minikube4\AppData\Local\Temp\build.1704977889.tar
I1228 06:41:28.195947    7196 build_images.go:134] succeeded building to: functional-561400
I1228 06:41:28.195947    7196 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (10.68s)
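As the stderr above shows, image build works by tarring the local build context, copying the tar into the node over SSH, and running docker build inside the node against the unpacked context. The equivalent manual invocation, assuming the same profile and testdata layout:
	out/minikube-windows-amd64.exe -p functional-561400 image build -t localhost/my-image:functional-561400 testdata\build
	out/minikube-windows-amd64.exe -p functional-561400 image ls   # localhost/my-image:functional-561400 should now be listed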

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.57s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0: (1.4757453s)
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-561400
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.57s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.15s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-561400 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-561400 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-windows-amd64.exe -p functional-561400 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-561400 --alsologtostderr: (2.9452328s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.47s)
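image load --daemon transfers an image from the host Docker daemon into the cluster's container runtime, so a locally tagged image becomes usable by pods without a registry push. The sequence this test relies on (the tag was created in ImageCommands/Setup above):
	docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-561400
	out/minikube-windows-amd64.exe -p functional-561400 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-561400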

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-561400 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 7956: OpenProcess: The parameter is incorrect.
helpers_test.go:526: unable to kill pid 9196: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 service hello-node --url --format={{.IP}}
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-561400 service hello-node --url --format={{.IP}}: exit status 1 (15.0132108s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.01s)
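The exit status 1 here is expected rather than a failure: with the Docker driver on Windows, minikube service keeps a tunnel process alive in the foreground (hence the "terminal needs to be open" warning), and the harness kills it after 15s once the URL has been read from stdout. The --format flag takes a Go template, so {{.IP}} selects just the address:
	out/minikube-windows-amd64.exe -p functional-561400 service hello-node --url --format="{{.IP}}"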

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-561400 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-561400 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-561400 --alsologtostderr: (2.3934432s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-561400
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-561400 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-561400 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-561400 --alsologtostderr: (2.6201367s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.80s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (1.03s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (1.03s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.94s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1335: Took "777.2358ms" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1349: Took "165.7212ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.94s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.92s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1386: Took "751.8381ms" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1399: Took "165.185ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.92s)
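The timing gap above illustrates the two modes: plain profile list queries each cluster for live status (~0.75s here), while --light reads only the stored profile configs and skips the status checks (~0.17s). A sketch for scripting against it:
	out/minikube-windows-amd64.exe profile list -o json --light | ConvertFrom-Json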

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-561400 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-561400 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.21s)
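Together with ImageSaveToFile above, this is the tarball round trip: image save exports an image from the cluster runtime to a local tar, and image load imports it back. A sketch with an illustrative path (C:\tmp is a placeholder, not the path the test used):
	out/minikube-windows-amd64.exe -p functional-561400 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-561400 C:\tmp\echo-server.tar
	out/minikube-windows-amd64.exe -p functional-561400 image load C:\tmp\echo-server.tar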

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-561400
functional_test.go:439: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-561400 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-561400
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.85s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-561400 service hello-node --url
functional_test.go:1574: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-561400 service hello-node --url: exit status 1 (15.009653s)

-- stdout --
	http://127.0.0.1:52756

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1580: found endpoint for hello-node: http://127.0.0.1:52756
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.01s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.15s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-561400
--- PASS: TestFunctional/delete_echo-server_images (0.15s)

                                                
                                    
TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-561400
--- PASS: TestFunctional/delete_my-image_image (0.06s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-561400
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (216.5s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker
E1228 06:48:10.152285   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe -p ha-293400 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker: (3m34.9042484s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 status --alsologtostderr -v 5
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-293400 status --alsologtostderr -v 5: (1.5948754s)
--- PASS: TestMultiControlPlane/serial/StartCluster (216.50s)
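The --ha flag provisions a multi-control-plane cluster (three control-plane nodes by default) instead of the usual single node, and --wait true blocks until the components report healthy. The invocation under test, reduced to its essentials:
	out/minikube-windows-amd64.exe -p ha-293400 start --ha --memory 3072 --wait true --driver=docker
	out/minikube-windows-amd64.exe -p ha-293400 status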

                                                
                                    
TestMultiControlPlane/serial/DeployApp (9.43s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe -p ha-293400 kubectl -- rollout status deployment/busybox: (4.4987968s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 kubectl -- exec busybox-769dd8b7dd-55tdm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 kubectl -- exec busybox-769dd8b7dd-q64lz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 kubectl -- exec busybox-769dd8b7dd-wwcxd -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 kubectl -- exec busybox-769dd8b7dd-55tdm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 kubectl -- exec busybox-769dd8b7dd-q64lz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 kubectl -- exec busybox-769dd8b7dd-wwcxd -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 kubectl -- exec busybox-769dd8b7dd-55tdm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 kubectl -- exec busybox-769dd8b7dd-q64lz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 kubectl -- exec busybox-769dd8b7dd-wwcxd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.43s)
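The deployment check is a standard pattern: apply a manifest, wait for the rollout, then exec into each replica to confirm in-cluster DNS resolves. Condensed (the pod name is specific to this run):
	out/minikube-windows-amd64.exe -p ha-293400 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
	out/minikube-windows-amd64.exe -p ha-293400 kubectl -- rollout status deployment/busybox
	out/minikube-windows-amd64.exe -p ha-293400 kubectl -- exec busybox-769dd8b7dd-55tdm -- nslookup kubernetes.default.svc.cluster.local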

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (2.5s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 kubectl -- exec busybox-769dd8b7dd-55tdm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 kubectl -- exec busybox-769dd8b7dd-55tdm -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 kubectl -- exec busybox-769dd8b7dd-q64lz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 kubectl -- exec busybox-769dd8b7dd-q64lz -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 kubectl -- exec busybox-769dd8b7dd-wwcxd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 kubectl -- exec busybox-769dd8b7dd-wwcxd -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (2.50s)
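host.minikube.internal is a DNS name minikube injects so pods can reach the host machine; on Docker Desktop it resolves to the host gateway address, 192.168.65.254 in this run. The probe pattern, again with this run's pod name:
	out/minikube-windows-amd64.exe -p ha-293400 kubectl -- exec busybox-769dd8b7dd-55tdm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-windows-amd64.exe -p ha-293400 kubectl -- exec busybox-769dd8b7dd-55tdm -- sh -c "ping -c 1 192.168.65.254"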

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (53.93s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 node add --alsologtostderr -v 5
E1228 06:50:36.952642   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:50:36.958638   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:50:36.969641   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:50:36.990653   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:50:37.031363   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:50:37.111727   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:50:37.272021   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:50:37.592339   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:50:38.233482   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:50:39.515041   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:50:42.076064   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:50:47.197204   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:50:57.438346   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe -p ha-293400 node add --alsologtostderr -v 5: (52.0355509s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-293400 status --alsologtostderr -v 5: (1.8960347s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.93s)
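node add joins a new node to an existing cluster; without a --control-plane flag it comes up as a worker (here ha-293400-m04, the fourth node). Reduced invocation:
	out/minikube-windows-amd64.exe -p ha-293400 node add
	out/minikube-windows-amd64.exe -p ha-293400 status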

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-293400 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.9378393s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.94s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (32.94s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-windows-amd64.exe -p ha-293400 status --output json --alsologtostderr -v 5: (1.8896563s)
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 cp testdata\cp-test.txt ha-293400:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 cp ha-293400:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3318610180\001\cp-test_ha-293400.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 cp ha-293400:/home/docker/cp-test.txt ha-293400-m02:/home/docker/cp-test_ha-293400_ha-293400-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400-m02 "sudo cat /home/docker/cp-test_ha-293400_ha-293400-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 cp ha-293400:/home/docker/cp-test.txt ha-293400-m03:/home/docker/cp-test_ha-293400_ha-293400-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400-m03 "sudo cat /home/docker/cp-test_ha-293400_ha-293400-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 cp ha-293400:/home/docker/cp-test.txt ha-293400-m04:/home/docker/cp-test_ha-293400_ha-293400-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400-m04 "sudo cat /home/docker/cp-test_ha-293400_ha-293400-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 cp testdata\cp-test.txt ha-293400-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 cp ha-293400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3318610180\001\cp-test_ha-293400-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 cp ha-293400-m02:/home/docker/cp-test.txt ha-293400:/home/docker/cp-test_ha-293400-m02_ha-293400.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400 "sudo cat /home/docker/cp-test_ha-293400-m02_ha-293400.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 cp ha-293400-m02:/home/docker/cp-test.txt ha-293400-m03:/home/docker/cp-test_ha-293400-m02_ha-293400-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400-m02 "sudo cat /home/docker/cp-test.txt"
E1228 06:51:17.919563   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400-m03 "sudo cat /home/docker/cp-test_ha-293400-m02_ha-293400-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 cp ha-293400-m02:/home/docker/cp-test.txt ha-293400-m04:/home/docker/cp-test_ha-293400-m02_ha-293400-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400-m04 "sudo cat /home/docker/cp-test_ha-293400-m02_ha-293400-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 cp testdata\cp-test.txt ha-293400-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 cp ha-293400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3318610180\001\cp-test_ha-293400-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 cp ha-293400-m03:/home/docker/cp-test.txt ha-293400:/home/docker/cp-test_ha-293400-m03_ha-293400.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400 "sudo cat /home/docker/cp-test_ha-293400-m03_ha-293400.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 cp ha-293400-m03:/home/docker/cp-test.txt ha-293400-m02:/home/docker/cp-test_ha-293400-m03_ha-293400-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400-m02 "sudo cat /home/docker/cp-test_ha-293400-m03_ha-293400-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 cp ha-293400-m03:/home/docker/cp-test.txt ha-293400-m04:/home/docker/cp-test_ha-293400-m03_ha-293400-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400-m04 "sudo cat /home/docker/cp-test_ha-293400-m03_ha-293400-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 cp testdata\cp-test.txt ha-293400-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 cp ha-293400-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3318610180\001\cp-test_ha-293400-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 cp ha-293400-m04:/home/docker/cp-test.txt ha-293400:/home/docker/cp-test_ha-293400-m04_ha-293400.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400 "sudo cat /home/docker/cp-test_ha-293400-m04_ha-293400.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 cp ha-293400-m04:/home/docker/cp-test.txt ha-293400-m02:/home/docker/cp-test_ha-293400-m04_ha-293400-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400-m02 "sudo cat /home/docker/cp-test_ha-293400-m04_ha-293400-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 cp ha-293400-m04:/home/docker/cp-test.txt ha-293400-m03:/home/docker/cp-test_ha-293400-m04_ha-293400-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 ssh -n ha-293400-m03 "sudo cat /home/docker/cp-test_ha-293400-m04_ha-293400-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (32.94s)
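All of the cp/ssh pairs above follow one run-and-compare pattern. As a reference point only, a minimal Go sketch of that pattern — not the actual helpers_test.go code; the binary path, profile, and file paths are the ones appearing in this report:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns its combined output, mirroring the
// "(dbg) Run:" lines in this report.
func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err
}

func main() {
	bin := `out/minikube-windows-amd64.exe`
	// Copy a local file into a node, then read it back over ssh.
	if _, err := run(bin, "-p", "ha-293400", "cp", `testdata\cp-test.txt`, "ha-293400-m02:/home/docker/cp-test.txt"); err != nil {
		panic(err)
	}
	got, err := run(bin, "-p", "ha-293400", "ssh", "-n", "ha-293400-m02", "sudo cat /home/docker/cp-test.txt")
	if err != nil {
		panic(err)
	}
	fmt.Print(got) // should match the local testdata\cp-test.txt
}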

TestMultiControlPlane/serial/StopSecondaryNode (13.45s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p ha-293400 node stop m02 --alsologtostderr -v 5: (11.9568321s)
ha_test.go:371: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-293400 status --alsologtostderr -v 5: exit status 7 (1.4886207s)

-- stdout --
	ha-293400
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-293400-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-293400-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-293400-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1228 06:51:48.262664    9508 out.go:360] Setting OutFile to fd 1872 ...
	I1228 06:51:48.304595    9508 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:51:48.304595    9508 out.go:374] Setting ErrFile to fd 1528...
	I1228 06:51:48.304595    9508 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:51:48.314574    9508 out.go:368] Setting JSON to false
	I1228 06:51:48.314574    9508 mustload.go:66] Loading cluster: ha-293400
	I1228 06:51:48.314574    9508 notify.go:221] Checking for updates...
	I1228 06:51:48.315568    9508 config.go:182] Loaded profile config "ha-293400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 06:51:48.315568    9508 status.go:174] checking status of ha-293400 ...
	I1228 06:51:48.323204    9508 cli_runner.go:164] Run: docker container inspect ha-293400 --format={{.State.Status}}
	I1228 06:51:48.378117    9508 status.go:371] ha-293400 host status = "Running" (err=<nil>)
	I1228 06:51:48.378117    9508 host.go:66] Checking if "ha-293400" exists ...
	I1228 06:51:48.382652    9508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-293400
	I1228 06:51:48.438132    9508 host.go:66] Checking if "ha-293400" exists ...
	I1228 06:51:48.443232    9508 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:51:48.446271    9508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-293400
	I1228 06:51:48.498029    9508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52807 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-293400\id_rsa Username:docker}
	I1228 06:51:48.620511    9508 ssh_runner.go:195] Run: systemctl --version
	I1228 06:51:48.637448    9508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:51:48.662931    9508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-293400
	I1228 06:51:48.717490    9508 kubeconfig.go:125] found "ha-293400" server: "https://127.0.0.1:52811"
	I1228 06:51:48.718285    9508 api_server.go:166] Checking apiserver status ...
	I1228 06:51:48.725424    9508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:51:48.749573    9508 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2406/cgroup
	I1228 06:51:48.762766    9508 api_server.go:192] apiserver freezer: "7:freezer:/docker/175ea81d10138dc867905d6891994d9871eac7b196d9df1e9961f075551ee75d/kubepods/burstable/pod4972a80191677fe461ba201a104bb988/deb916777ac2d63b9673efaf54fb0281971c68b6e9631253539d4e9653053f26"
	I1228 06:51:48.766679    9508 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/175ea81d10138dc867905d6891994d9871eac7b196d9df1e9961f075551ee75d/kubepods/burstable/pod4972a80191677fe461ba201a104bb988/deb916777ac2d63b9673efaf54fb0281971c68b6e9631253539d4e9653053f26/freezer.state
	I1228 06:51:48.780605    9508 api_server.go:214] freezer state: "THAWED"
	I1228 06:51:48.780605    9508 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:52811/healthz ...
	I1228 06:51:48.789097    9508 api_server.go:325] https://127.0.0.1:52811/healthz returned 200:
	ok
	I1228 06:51:48.789122    9508 status.go:463] ha-293400 apiserver status = Running (err=<nil>)
	I1228 06:51:48.789122    9508 status.go:176] ha-293400 status: &{Name:ha-293400 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:51:48.789122    9508 status.go:174] checking status of ha-293400-m02 ...
	I1228 06:51:48.796217    9508 cli_runner.go:164] Run: docker container inspect ha-293400-m02 --format={{.State.Status}}
	I1228 06:51:48.850904    9508 status.go:371] ha-293400-m02 host status = "Stopped" (err=<nil>)
	I1228 06:51:48.850904    9508 status.go:384] host is not running, skipping remaining checks
	I1228 06:51:48.850904    9508 status.go:176] ha-293400-m02 status: &{Name:ha-293400-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:51:48.850904    9508 status.go:174] checking status of ha-293400-m03 ...
	I1228 06:51:48.858405    9508 cli_runner.go:164] Run: docker container inspect ha-293400-m03 --format={{.State.Status}}
	I1228 06:51:48.913181    9508 status.go:371] ha-293400-m03 host status = "Running" (err=<nil>)
	I1228 06:51:48.913181    9508 host.go:66] Checking if "ha-293400-m03" exists ...
	I1228 06:51:48.917714    9508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-293400-m03
	I1228 06:51:48.970442    9508 host.go:66] Checking if "ha-293400-m03" exists ...
	I1228 06:51:48.975388    9508 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:51:48.978352    9508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-293400-m03
	I1228 06:51:49.088574    9508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52927 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-293400-m03\id_rsa Username:docker}
	I1228 06:51:49.212184    9508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:51:49.236099    9508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-293400
	I1228 06:51:49.286060    9508 kubeconfig.go:125] found "ha-293400" server: "https://127.0.0.1:52811"
	I1228 06:51:49.286060    9508 api_server.go:166] Checking apiserver status ...
	I1228 06:51:49.289069    9508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:51:49.310074    9508 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2318/cgroup
	I1228 06:51:49.323062    9508 api_server.go:192] apiserver freezer: "7:freezer:/docker/4efb2a9207e45e645fde2a6b12e4088ef76a29b8001243b8a6b46672e7053567/kubepods/burstable/pod8d71f01d68da8ca57889e589cda91f39/bf9865b1d8e32a586295f37616a56d64ebc12845566667d7402e2edfc4a419fd"
	I1228 06:51:49.326067    9508 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4efb2a9207e45e645fde2a6b12e4088ef76a29b8001243b8a6b46672e7053567/kubepods/burstable/pod8d71f01d68da8ca57889e589cda91f39/bf9865b1d8e32a586295f37616a56d64ebc12845566667d7402e2edfc4a419fd/freezer.state
	I1228 06:51:49.338076    9508 api_server.go:214] freezer state: "THAWED"
	I1228 06:51:49.338076    9508 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:52811/healthz ...
	I1228 06:51:49.346927    9508 api_server.go:325] https://127.0.0.1:52811/healthz returned 200:
	ok
	I1228 06:51:49.346927    9508 status.go:463] ha-293400-m03 apiserver status = Running (err=<nil>)
	I1228 06:51:49.346927    9508 status.go:176] ha-293400-m03 status: &{Name:ha-293400-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:51:49.346927    9508 status.go:174] checking status of ha-293400-m04 ...
	I1228 06:51:49.353758    9508 cli_runner.go:164] Run: docker container inspect ha-293400-m04 --format={{.State.Status}}
	I1228 06:51:49.406440    9508 status.go:371] ha-293400-m04 host status = "Running" (err=<nil>)
	I1228 06:51:49.406440    9508 host.go:66] Checking if "ha-293400-m04" exists ...
	I1228 06:51:49.411047    9508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-293400-m04
	I1228 06:51:49.465335    9508 host.go:66] Checking if "ha-293400-m04" exists ...
	I1228 06:51:49.470231    9508 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:51:49.473077    9508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-293400-m04
	I1228 06:51:49.526500    9508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53062 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-293400-m04\id_rsa Username:docker}
	I1228 06:51:49.638039    9508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:51:49.657476    9508 status.go:176] ha-293400-m04 status: &{Name:ha-293400-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.45s)
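The status check in the stderr above (api_server.go) locates the kube-apiserver's freezer cgroup via /proc/<pid>/cgroup and then reads freezer.state, expecting "THAWED". A minimal sketch of that lookup, assuming a cgroup v1 freezer hierarchy and meant to run on the node itself (the real check goes over ssh_runner); the PID is the one from this log:

package main

import (
	"fmt"
	"os"
	"strings"
)

// freezerPath extracts the freezer hierarchy path from /proc/<pid>/cgroup
// content, where a line looks like "7:freezer:/docker/<id>/kubepods/...".
func freezerPath(cgroupData string) (string, error) {
	for _, line := range strings.Split(cgroupData, "\n") {
		parts := strings.SplitN(line, ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			return parts[2], nil
		}
	}
	return "", fmt.Errorf("no freezer entry found")
}

func main() {
	data, err := os.ReadFile("/proc/2406/cgroup") // apiserver PID from the log above
	if err != nil {
		panic(err)
	}
	p, err := freezerPath(string(data))
	if err != nil {
		panic(err)
	}
	state, err := os.ReadFile("/sys/fs/cgroup/freezer" + p + "/freezer.state")
	if err != nil {
		panic(err)
	}
	fmt.Printf("freezer state: %q\n", strings.TrimSpace(string(state))) // "THAWED" when not paused
}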

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.5392481s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.54s)

TestMultiControlPlane/serial/RestartSecondaryNode (49.92s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 node start m02 --alsologtostderr -v 5
E1228 06:51:58.880517   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p ha-293400 node start m02 --alsologtostderr -v 5: (47.850537s)
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-windows-amd64.exe -p ha-293400 status --alsologtostderr -v 5: (1.9257821s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (49.92s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.95s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.9502874s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.95s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (201.55s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 stop --alsologtostderr -v 5
E1228 06:53:10.156824   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:53:20.802134   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-windows-amd64.exe -p ha-293400 stop --alsologtostderr -v 5: (37.6110469s)
ha_test.go:469: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 start --wait true --alsologtostderr -v 5
E1228 06:54:33.204811   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:55:36.957629   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-windows-amd64.exe -p ha-293400 start --wait true --alsologtostderr -v 5: (2m43.6212067s)
ha_test.go:474: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (201.55s)

TestMultiControlPlane/serial/DeleteSecondaryNode (14.19s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 node delete m03 --alsologtostderr -v 5
E1228 06:56:04.645255   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-windows-amd64.exe -p ha-293400 node delete m03 --alsologtostderr -v 5: (12.3313386s)
ha_test.go:495: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Done: out/minikube-windows-amd64.exe -p ha-293400 status --alsologtostderr -v 5: (1.4042531s)
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (14.19s)
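The go-template handed to kubectl above extracts each node's Ready condition. A self-contained sketch evaluating the same template with Go's text/template; the sample JSON is hypothetical but shaped like `kubectl get nodes -o json`:

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// The template from the test above: for every node, print the status of
// its "Ready" condition, one per line.
const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

const sample = `{"items":[{"status":{"conditions":[{"type":"Ready","status":"True"}]}},{"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

func main() {
	var nodes map[string]any
	if err := json.Unmarshal([]byte(sample), &nodes); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(readyTmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
	// Prints " True" once per node: every node's Ready condition is True.
}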

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.53s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.5276003s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.53s)

TestMultiControlPlane/serial/StopCluster (37.53s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p ha-293400 stop --alsologtostderr -v 5: (37.2010047s)
ha_test.go:539: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-293400 status --alsologtostderr -v 5: exit status 7 (323.9356ms)

-- stdout --
	ha-293400
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-293400-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-293400-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1228 06:56:57.631318   13692 out.go:360] Setting OutFile to fd 1528 ...
	I1228 06:56:57.674364   13692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:56:57.674364   13692 out.go:374] Setting ErrFile to fd 1668...
	I1228 06:56:57.674364   13692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:56:57.683908   13692 out.go:368] Setting JSON to false
	I1228 06:56:57.683908   13692 mustload.go:66] Loading cluster: ha-293400
	I1228 06:56:57.684910   13692 notify.go:221] Checking for updates...
	I1228 06:56:57.684910   13692 config.go:182] Loaded profile config "ha-293400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 06:56:57.684910   13692 status.go:174] checking status of ha-293400 ...
	I1228 06:56:57.692541   13692 cli_runner.go:164] Run: docker container inspect ha-293400 --format={{.State.Status}}
	I1228 06:56:57.750959   13692 status.go:371] ha-293400 host status = "Stopped" (err=<nil>)
	I1228 06:56:57.750959   13692 status.go:384] host is not running, skipping remaining checks
	I1228 06:56:57.750959   13692 status.go:176] ha-293400 status: &{Name:ha-293400 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:56:57.750959   13692 status.go:174] checking status of ha-293400-m02 ...
	I1228 06:56:57.756959   13692 cli_runner.go:164] Run: docker container inspect ha-293400-m02 --format={{.State.Status}}
	I1228 06:56:57.804960   13692 status.go:371] ha-293400-m02 host status = "Stopped" (err=<nil>)
	I1228 06:56:57.804960   13692 status.go:384] host is not running, skipping remaining checks
	I1228 06:56:57.804960   13692 status.go:176] ha-293400-m02 status: &{Name:ha-293400-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:56:57.804960   13692 status.go:174] checking status of ha-293400-m04 ...
	I1228 06:56:57.811960   13692 cli_runner.go:164] Run: docker container inspect ha-293400-m04 --format={{.State.Status}}
	I1228 06:56:57.861961   13692 status.go:371] ha-293400-m04 host status = "Stopped" (err=<nil>)
	I1228 06:56:57.861961   13692 status.go:384] host is not running, skipping remaining checks
	I1228 06:56:57.861961   13692 status.go:176] ha-293400-m04 status: &{Name:ha-293400-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (37.53s)

TestMultiControlPlane/serial/RestartCluster (72.96s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 start --wait true --alsologtostderr -v 5 --driver=docker
ha_test.go:562: (dbg) Done: out/minikube-windows-amd64.exe -p ha-293400 start --wait true --alsologtostderr -v 5 --driver=docker: (1m11.174089s)
ha_test.go:568: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 status --alsologtostderr -v 5
E1228 06:58:10.160344   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:568: (dbg) Done: out/minikube-windows-amd64.exe -p ha-293400 status --alsologtostderr -v 5: (1.3969516s)
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (72.96s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.5212758s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.52s)

TestMultiControlPlane/serial/AddSecondaryNode (98.69s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-windows-amd64.exe -p ha-293400 node add --control-plane --alsologtostderr -v 5: (1m36.7722088s)
ha_test.go:613: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-293400 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-windows-amd64.exe -p ha-293400 status --alsologtostderr -v 5: (1.9150793s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (98.69s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.98s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.9823864s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.98s)

TestImageBuild/serial/Setup (47.4s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-224400 --driver=docker
E1228 07:00:36.961386   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-224400 --driver=docker: (47.404702s)
--- PASS: TestImageBuild/serial/Setup (47.40s)

TestImageBuild/serial/NormalBuild (4.44s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-224400
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-224400: (4.4365912s)
--- PASS: TestImageBuild/serial/NormalBuild (4.44s)

TestImageBuild/serial/BuildWithBuildArg (2.18s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-224400
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-224400: (2.1764343s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (2.18s)

TestImageBuild/serial/BuildWithDockerIgnore (1.26s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-224400
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-224400: (1.2560296s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.26s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.29s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-224400
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-224400: (1.2870174s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.29s)

TestJSONOutput/start/Command (85.96s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-253700 --output=json --user=testUser --memory=3072 --wait=true --driver=docker
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-253700 --output=json --user=testUser --memory=3072 --wait=true --driver=docker: (1m25.9562992s)
--- PASS: TestJSONOutput/start/Command (85.96s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
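The two parallel subtests assert properties of minikube's --output=json step events: "currentstep" values must be distinct (DistinctCurrentSteps) and must not go backwards (IncreasingCurrentSteps). A sketch of such a check, assuming events shaped like those dumped under TestErrorJSONOutput further below; note currentstep arrives as a JSON string:

package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

// checkSteps returns an error if "currentstep" values across the given
// event lines repeat or decrease. Lines without a currentstep are skipped.
func checkSteps(lines []string) error {
	last := -1
	seen := map[int]bool{}
	for _, l := range lines {
		var e struct {
			Data map[string]string `json:"data"`
		}
		if err := json.Unmarshal([]byte(l), &e); err != nil {
			return err
		}
		s, ok := e.Data["currentstep"]
		if !ok {
			continue // not a step event
		}
		n, err := strconv.Atoi(s)
		if err != nil {
			return err
		}
		if seen[n] || n < last {
			return fmt.Errorf("currentstep %d repeats or decreases", n)
		}
		seen[n] = true
		last = n
	}
	return nil
}

func main() {
	sample := []string{ // hypothetical events for illustration
		`{"data":{"currentstep":"0","message":"first step"}}`,
		`{"data":{"currentstep":"1","message":"second step"}}`,
	}
	fmt.Println(checkSteps(sample)) // <nil>
}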

TestJSONOutput/pause/Command (1.13s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-253700 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-253700 --output=json --user=testUser: (1.1300825s)
--- PASS: TestJSONOutput/pause/Command (1.13s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.88s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-253700 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.88s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.32s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-253700 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-253700 --output=json --user=testUser: (12.3154959s)
--- PASS: TestJSONOutput/stop/Command (12.32s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.64s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-063500 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-063500 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (194.5226ms)

-- stdout --
	{"specversion":"1.0","id":"dd55aff3-9cda-4f39-95d7-578106da6849","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-063500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"adf5dac3-7d7a-46f2-bcd6-02b617fa08d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"7464976f-446d-42d5-a447-6a61fdc04560","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1bbb133c-c6e6-4f0f-9df4-757c9a989e0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"bdf29960-9861-4062-96fd-4eea6ac17a59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22352"}}
	{"specversion":"1.0","id":"83c699fc-f029-4cca-8f27-6c8e9bbac06e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"863d6983-36c4-4eb8-80d0-1525cd12147d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-063500" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-063500
--- PASS: TestErrorJSONOutput (0.64s)
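Each stdout line above is a CloudEvents envelope. A minimal decoder sketch for the fields visible in this dump; the struct is illustrative rather than minikube's own type, and the sample line is the error event copied verbatim from the output:

package main

import (
	"encoding/json"
	"fmt"
)

// event models only the CloudEvents fields that appear in the dump above.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"863d6983-36c4-4eb8-80d0-1525cd12147d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	// Prints: io.k8s.sigs.minikube.error 56 The driver 'fail' is not supported on windows/amd64
	fmt.Println(e.Type, e.Data["exitcode"], e.Data["message"])
}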

TestKicCustomNetwork/create_custom_network (51.07s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-601200 --network=
E1228 07:03:10.163657   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-601200 --network=: (47.5132644s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-601200" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-601200
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-601200: (3.4930626s)
--- PASS: TestKicCustomNetwork/create_custom_network (51.07s)

TestKicCustomNetwork/use_default_bridge_network (50.42s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-392600 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-392600 --network=bridge: (47.1757148s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-392600" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-392600
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-392600: (3.1826941s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (50.42s)

TestKicExistingNetwork (49.73s)

=== RUN   TestKicExistingNetwork
I1228 07:04:32.066615   13556 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1228 07:04:32.122750   13556 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1228 07:04:32.126802   13556 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1228 07:04:32.126844   13556 cli_runner.go:164] Run: docker network inspect existing-network
W1228 07:04:32.183202   13556 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1228 07:04:32.183313   13556 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1228 07:04:32.183361   13556 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1228 07:04:32.186652   13556 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1228 07:04:32.261401   13556 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a227e0}
I1228 07:04:32.261431   13556 network_create.go:124] attempt to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1228 07:04:32.265930   13556 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
W1228 07:04:32.323928   13556 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network returned with exit code 1
W1228 07:04:32.323928   13556 network_create.go:149] failed to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network: exit status 1
stdout:

stderr:
Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
W1228 07:04:32.323928   13556 network_create.go:116] failed to create docker network existing-network 192.168.49.0/24, will retry: subnet is taken
I1228 07:04:32.339930   13556 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1228 07:04:32.353033   13556 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00083d3e0}
I1228 07:04:32.353091   13556 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1228 07:04:32.356864   13556 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1228 07:04:32.526776   13556 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-804300 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-804300 --network=existing-network: (46.0241138s)
helpers_test.go:176: Cleaning up "existing-network-804300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-804300
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-804300: (3.114509s)
I1228 07:05:21.735089   13556 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (49.73s)
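The retry visible above (192.168.49.0/24 overlaps an existing pool, so the next attempt uses 192.168.58.0/24) boils down to probing candidate subnets and skipping ones already in use. A simplified sketch of that selection; the candidate list and overlap test are illustrative, while the real code derives the taken set from docker:

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet returns the first candidate CIDR that does not overlap
// any subnet in taken.
func firstFreeSubnet(candidates []string, taken []*net.IPNet) (string, error) {
	for _, c := range candidates {
		_, ipnet, err := net.ParseCIDR(c)
		if err != nil {
			return "", err
		}
		overlaps := false
		for _, t := range taken {
			if t.Contains(ipnet.IP) || ipnet.Contains(t.IP) {
				overlaps = true
				break
			}
		}
		if !overlaps {
			return c, nil
		}
	}
	return "", fmt.Errorf("no free subnet among %v", candidates)
}

func main() {
	_, used, _ := net.ParseCIDR("192.168.49.0/24") // taken by an existing network, as in the log
	subnet, err := firstFreeSubnet([]string{"192.168.49.0/24", "192.168.58.0/24"}, []*net.IPNet{used})
	if err != nil {
		panic(err)
	}
	fmt.Println("using", subnet) // using 192.168.58.0/24
}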

TestKicCustomSubnet (50.55s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-131800 --subnet=192.168.60.0/24
E1228 07:05:36.964703   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-131800 --subnet=192.168.60.0/24: (46.9332603s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-131800 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-131800" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-131800
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-131800: (3.5553405s)
--- PASS: TestKicCustomSubnet (50.55s)

TestKicStaticIP (50.48s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p static-ip-304600 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p static-ip-304600 --static-ip=192.168.200.200: (46.6810336s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe -p static-ip-304600 ip
helpers_test.go:176: Cleaning up "static-ip-304600" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p static-ip-304600
E1228 07:07:00.014836   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p static-ip-304600: (3.4978141s)
--- PASS: TestKicStaticIP (50.48s)

TestMainNoArgs (0.16s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.16s)

TestMinikubeProfile (93.52s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-951200 --driver=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-951200 --driver=docker: (42.3373731s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-951200 --driver=docker
E1228 07:08:10.168122   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-951200 --driver=docker: (40.8963523s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-951200
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.1389123s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-951200
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.1651911s)
helpers_test.go:176: Cleaning up "second-951200" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-951200
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-951200: (3.8939474s)
helpers_test.go:176: Cleaning up "first-951200" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-951200
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-951200: (3.6477303s)
--- PASS: TestMinikubeProfile (93.52s)
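The profile round-trip above creates two profiles, switches the active one, and lists both as JSON; a minimal sketch (profile names first and second are illustrative):

    minikube start -p first --driver=docker
    minikube start -p second --driver=docker
    minikube profile first        # switch the active profile
    minikube profile list -ojson  # machine-readable listing checked by the test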

TestMountStart/serial/StartWithMountFirst (13.71s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-386700 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial3690077986\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-386700 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial3690077986\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (12.7066163s)
--- PASS: TestMountStart/serial/StartWithMountFirst (13.71s)

TestMountStart/serial/VerifyMountFirst (0.55s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-386700 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.55s)
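The two mount steps above pair a host-path mount at start-up with an in-guest listing over ssh; a minimal sketch (the host path and profile name are illustrative):

    minikube start -p mount-demo --mount-string C:\some\host\dir:/minikube-host --mount-port 46464 --no-kubernetes --driver=docker
    minikube -p mount-demo ssh -- ls /minikube-host   # host files should be listed here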

TestMountStart/serial/StartWithMountSecond (13.46s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-386700 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial3690077986\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-386700 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial3690077986\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (12.4578835s)
--- PASS: TestMountStart/serial/StartWithMountSecond (13.46s)

TestMountStart/serial/VerifyMountSecond (0.54s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-386700 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.54s)

TestMountStart/serial/DeleteFirst (2.45s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-386700 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-386700 --alsologtostderr -v=5: (2.4530618s)
--- PASS: TestMountStart/serial/DeleteFirst (2.45s)

TestMountStart/serial/VerifyMountPostDelete (0.53s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-386700 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.53s)

TestMountStart/serial/Stop (1.87s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-386700
mount_start_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-386700: (1.8664449s)
--- PASS: TestMountStart/serial/Stop (1.87s)

TestMountStart/serial/RestartStopped (10.56s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-386700
mount_start_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-386700: (9.5568294s)
--- PASS: TestMountStart/serial/RestartStopped (10.56s)

TestMountStart/serial/VerifyMountPostStop (0.52s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-386700 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.52s)

TestMultiNode/serial/FreshStart2Nodes (125.2s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-297500 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker
E1228 07:10:36.968846   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:11:13.218383   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-297500 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker: (2m4.2282567s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (125.20s)
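A two-node cluster like the one above is requested with --nodes at start; a minimal sketch (profile name mn-demo is illustrative):

    minikube start -p mn-demo --wait=true --memory=3072 --nodes=2 --driver=docker
    minikube -p mn-demo status   # should report one control plane and one worker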

TestMultiNode/serial/DeployApp2Nodes (6.89s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-297500 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-297500 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-297500 -- rollout status deployment/busybox: (3.4282012s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-297500 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-297500 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-297500 -- exec busybox-769dd8b7dd-fpdj6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-297500 -- exec busybox-769dd8b7dd-twgd9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-297500 -- exec busybox-769dd8b7dd-fpdj6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-297500 -- exec busybox-769dd8b7dd-twgd9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-297500 -- exec busybox-769dd8b7dd-fpdj6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-297500 -- exec busybox-769dd8b7dd-twgd9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.89s)
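The deployment check reduces to applying a manifest, waiting for rollout, and resolving in-cluster names from each pod; a minimal sketch (the manifest path and pod name are placeholders; the deployment name busybox matches this run):

    kubectl apply -f multinode-pod-dns-test.yaml
    kubectl rollout status deployment/busybox
    kubectl get pods -o jsonpath='{.items[*].metadata.name}'
    kubectl exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local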

TestMultiNode/serial/PingHostFrom2Pods (1.72s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-297500 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-297500 -- exec busybox-769dd8b7dd-fpdj6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-297500 -- exec busybox-769dd8b7dd-fpdj6 -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-297500 -- exec busybox-769dd8b7dd-twgd9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-297500 -- exec busybox-769dd8b7dd-twgd9 -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.72s)
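The host-reachability check resolves host.minikube.internal from inside a pod and pings the returned address (192.168.65.254 in this run); a minimal sketch (pod name is a placeholder):

    kubectl exec <pod-name> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl exec <pod-name> -- sh -c "ping -c 1 <resolved-address>"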

TestMultiNode/serial/AddNode (53.47s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-297500 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-297500 -v=5 --alsologtostderr: (52.1775466s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-297500 status --alsologtostderr: (1.2920416s)
--- PASS: TestMultiNode/serial/AddNode (53.47s)

TestMultiNode/serial/MultiNodeLabels (0.13s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-297500 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.13s)

TestMultiNode/serial/ProfileList (1.32s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.32013s)
--- PASS: TestMultiNode/serial/ProfileList (1.32s)

TestMultiNode/serial/CopyFile (18.89s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-297500 status --output json --alsologtostderr: (1.303807s)
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 cp testdata\cp-test.txt multinode-297500:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 ssh -n multinode-297500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 cp multinode-297500:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile254016623\001\cp-test_multinode-297500.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 ssh -n multinode-297500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 cp multinode-297500:/home/docker/cp-test.txt multinode-297500-m02:/home/docker/cp-test_multinode-297500_multinode-297500-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 ssh -n multinode-297500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 ssh -n multinode-297500-m02 "sudo cat /home/docker/cp-test_multinode-297500_multinode-297500-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 cp multinode-297500:/home/docker/cp-test.txt multinode-297500-m03:/home/docker/cp-test_multinode-297500_multinode-297500-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 ssh -n multinode-297500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 ssh -n multinode-297500-m03 "sudo cat /home/docker/cp-test_multinode-297500_multinode-297500-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 cp testdata\cp-test.txt multinode-297500-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 ssh -n multinode-297500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 cp multinode-297500-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile254016623\001\cp-test_multinode-297500-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 ssh -n multinode-297500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 cp multinode-297500-m02:/home/docker/cp-test.txt multinode-297500:/home/docker/cp-test_multinode-297500-m02_multinode-297500.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 ssh -n multinode-297500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 ssh -n multinode-297500 "sudo cat /home/docker/cp-test_multinode-297500-m02_multinode-297500.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 cp multinode-297500-m02:/home/docker/cp-test.txt multinode-297500-m03:/home/docker/cp-test_multinode-297500-m02_multinode-297500-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 ssh -n multinode-297500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 ssh -n multinode-297500-m03 "sudo cat /home/docker/cp-test_multinode-297500-m02_multinode-297500-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 cp testdata\cp-test.txt multinode-297500-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 ssh -n multinode-297500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 cp multinode-297500-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile254016623\001\cp-test_multinode-297500-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 ssh -n multinode-297500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 cp multinode-297500-m03:/home/docker/cp-test.txt multinode-297500:/home/docker/cp-test_multinode-297500-m03_multinode-297500.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 ssh -n multinode-297500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 ssh -n multinode-297500 "sudo cat /home/docker/cp-test_multinode-297500-m03_multinode-297500.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 cp multinode-297500-m03:/home/docker/cp-test.txt multinode-297500-m02:/home/docker/cp-test_multinode-297500-m03_multinode-297500-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 ssh -n multinode-297500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 ssh -n multinode-297500-m02 "sudo cat /home/docker/cp-test_multinode-297500-m03_multinode-297500-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (18.89s)
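The copy matrix above exercises minikube cp in three directions, each verified with an ssh cat on the receiving node; a minimal sketch (file and profile names are illustrative):

    minikube -p mn-demo cp local.txt mn-demo:/home/docker/remote.txt                             # host -> node
    minikube -p mn-demo cp mn-demo:/home/docker/remote.txt copied-back.txt                       # node -> host
    minikube -p mn-demo cp mn-demo:/home/docker/remote.txt mn-demo-m02:/home/docker/remote.txt   # node -> node
    minikube -p mn-demo ssh -n mn-demo-m02 "sudo cat /home/docker/remote.txt"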

TestMultiNode/serial/StopNode (3.79s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-297500 node stop m03: (1.7541767s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-297500 status: exit status 7 (1.0564371s)

-- stdout --
	multinode-297500
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-297500-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-297500-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-297500 status --alsologtostderr: exit status 7 (980.99ms)

-- stdout --
	multinode-297500
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-297500-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-297500-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1228 07:12:54.190770    7108 out.go:360] Setting OutFile to fd 704 ...
	I1228 07:12:54.233299    7108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:12:54.233299    7108 out.go:374] Setting ErrFile to fd 1864...
	I1228 07:12:54.233299    7108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:12:54.246700    7108 out.go:368] Setting JSON to false
	I1228 07:12:54.246700    7108 mustload.go:66] Loading cluster: multinode-297500
	I1228 07:12:54.246700    7108 notify.go:221] Checking for updates...
	I1228 07:12:54.247313    7108 config.go:182] Loaded profile config "multinode-297500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 07:12:54.247313    7108 status.go:174] checking status of multinode-297500 ...
	I1228 07:12:54.255502    7108 cli_runner.go:164] Run: docker container inspect multinode-297500 --format={{.State.Status}}
	I1228 07:12:54.309099    7108 status.go:371] multinode-297500 host status = "Running" (err=<nil>)
	I1228 07:12:54.309158    7108 host.go:66] Checking if "multinode-297500" exists ...
	I1228 07:12:54.312561    7108 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-297500
	I1228 07:12:54.360683    7108 host.go:66] Checking if "multinode-297500" exists ...
	I1228 07:12:54.365678    7108 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:12:54.368678    7108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-297500
	I1228 07:12:54.420679    7108 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54226 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-297500\id_rsa Username:docker}
	I1228 07:12:54.534315    7108 ssh_runner.go:195] Run: systemctl --version
	I1228 07:12:54.547818    7108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:12:54.572421    7108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-297500
	I1228 07:12:54.628770    7108 kubeconfig.go:125] found "multinode-297500" server: "https://127.0.0.1:54230"
	I1228 07:12:54.628770    7108 api_server.go:166] Checking apiserver status ...
	I1228 07:12:54.633906    7108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 07:12:54.659793    7108 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2201/cgroup
	I1228 07:12:54.672955    7108 api_server.go:192] apiserver freezer: "7:freezer:/docker/acb0b71a7f7055320d1e08edb8e863f491364247674d01d7aae29269e4cdf992/kubepods/burstable/pode90b2492208b6f0154554dfa34bc2ecd/4c47d789a5216ca302026a7d53659821af4bbff74c743aea9a34f9a58b11a2ae"
	I1228 07:12:54.677439    7108 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/acb0b71a7f7055320d1e08edb8e863f491364247674d01d7aae29269e4cdf992/kubepods/burstable/pode90b2492208b6f0154554dfa34bc2ecd/4c47d789a5216ca302026a7d53659821af4bbff74c743aea9a34f9a58b11a2ae/freezer.state
	I1228 07:12:54.691018    7108 api_server.go:214] freezer state: "THAWED"
	I1228 07:12:54.691018    7108 api_server.go:299] Checking apiserver healthz at https://127.0.0.1:54230/healthz ...
	I1228 07:12:54.699924    7108 api_server.go:325] https://127.0.0.1:54230/healthz returned 200:
	ok
	I1228 07:12:54.700036    7108 status.go:463] multinode-297500 apiserver status = Running (err=<nil>)
	I1228 07:12:54.700036    7108 status.go:176] multinode-297500 status: &{Name:multinode-297500 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 07:12:54.700036    7108 status.go:174] checking status of multinode-297500-m02 ...
	I1228 07:12:54.706703    7108 cli_runner.go:164] Run: docker container inspect multinode-297500-m02 --format={{.State.Status}}
	I1228 07:12:54.761578    7108 status.go:371] multinode-297500-m02 host status = "Running" (err=<nil>)
	I1228 07:12:54.761578    7108 host.go:66] Checking if "multinode-297500-m02" exists ...
	I1228 07:12:54.764587    7108 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-297500-m02
	I1228 07:12:54.814570    7108 host.go:66] Checking if "multinode-297500-m02" exists ...
	I1228 07:12:54.819571    7108 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:12:54.821582    7108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-297500-m02
	I1228 07:12:54.871575    7108 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54278 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-297500-m02\id_rsa Username:docker}
	I1228 07:12:54.985086    7108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:12:55.005799    7108 status.go:176] multinode-297500-m02 status: &{Name:multinode-297500-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1228 07:12:55.005799    7108 status.go:174] checking status of multinode-297500-m03 ...
	I1228 07:12:55.012820    7108 cli_runner.go:164] Run: docker container inspect multinode-297500-m03 --format={{.State.Status}}
	I1228 07:12:55.077995    7108 status.go:371] multinode-297500-m03 host status = "Stopped" (err=<nil>)
	I1228 07:12:55.077995    7108 status.go:384] host is not running, skipping remaining checks
	I1228 07:12:55.078039    7108 status.go:176] multinode-297500-m03 status: &{Name:multinode-297500-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.79s)
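Stopping one node leaves the others running, and status then exits with code 7 because a host is down; a minimal sketch:

    minikube -p mn-demo node stop m03
    minikube -p mn-demo status            # exit code 7; m03 reported as Stopped
    minikube -p mn-demo node start m03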

TestMultiNode/serial/StartAfterStop (13.06s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-297500 node start m03 -v=5 --alsologtostderr: (11.6399033s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 status -v=5 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-297500 status -v=5 --alsologtostderr: (1.2981677s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.06s)

TestMultiNode/serial/RestartKeepsNodes (79.66s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-297500
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-297500
E1228 07:13:10.172427   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-297500: (24.7827439s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-297500 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-297500 --wait=true -v=5 --alsologtostderr: (54.5740208s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-297500
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.66s)

TestMultiNode/serial/DeleteNode (8.09s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-297500 node delete m03: (6.7880178s)
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (8.09s)
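After a node delete, the remaining nodes can be checked for readiness with the same go-template the test uses; a minimal sketch:

    minikube -p mn-demo node delete m03
    kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"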

TestMultiNode/serial/StopMultiNode (23.9s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 stop
multinode_test.go:345: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-297500 stop: (23.3467097s)
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-297500 status: exit status 7 (272.7668ms)

-- stdout --
	multinode-297500
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-297500-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-297500 status --alsologtostderr: exit status 7 (275.4478ms)

-- stdout --
	multinode-297500
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-297500-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1228 07:14:59.611622    5816 out.go:360] Setting OutFile to fd 1084 ...
	I1228 07:14:59.656788    5816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:14:59.656864    5816 out.go:374] Setting ErrFile to fd 1184...
	I1228 07:14:59.656864    5816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:14:59.673395    5816 out.go:368] Setting JSON to false
	I1228 07:14:59.673395    5816 mustload.go:66] Loading cluster: multinode-297500
	I1228 07:14:59.673395    5816 notify.go:221] Checking for updates...
	I1228 07:14:59.674577    5816 config.go:182] Loaded profile config "multinode-297500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 07:14:59.674603    5816 status.go:174] checking status of multinode-297500 ...
	I1228 07:14:59.682049    5816 cli_runner.go:164] Run: docker container inspect multinode-297500 --format={{.State.Status}}
	I1228 07:14:59.734495    5816 status.go:371] multinode-297500 host status = "Stopped" (err=<nil>)
	I1228 07:14:59.734495    5816 status.go:384] host is not running, skipping remaining checks
	I1228 07:14:59.734495    5816 status.go:176] multinode-297500 status: &{Name:multinode-297500 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 07:14:59.734495    5816 status.go:174] checking status of multinode-297500-m02 ...
	I1228 07:14:59.740494    5816 cli_runner.go:164] Run: docker container inspect multinode-297500-m02 --format={{.State.Status}}
	I1228 07:14:59.794856    5816 status.go:371] multinode-297500-m02 host status = "Stopped" (err=<nil>)
	I1228 07:14:59.794856    5816 status.go:384] host is not running, skipping remaining checks
	I1228 07:14:59.794856    5816 status.go:176] multinode-297500-m02 status: &{Name:multinode-297500-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.90s)

TestMultiNode/serial/RestartMultiNode (60.05s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-297500 --wait=true -v=5 --alsologtostderr --driver=docker
E1228 07:15:36.973351   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-297500 --wait=true -v=5 --alsologtostderr --driver=docker: (58.7468773s)
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-297500 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (60.05s)

TestMultiNode/serial/ValidateNameConflict (46.82s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-297500
multinode_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-297500-m02 --driver=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-297500-m02 --driver=docker: exit status 14 (195.4791ms)

-- stdout --
	* [multinode-297500-m02] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-297500-m02' is duplicated with machine name 'multinode-297500-m02' in profile 'multinode-297500'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-297500-m03 --driver=docker
multinode_test.go:472: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-297500-m03 --driver=docker: (42.1426643s)
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-297500
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-297500: exit status 80 (640.4677ms)

-- stdout --
	* Adding node m03 to cluster multinode-297500 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-297500-m03 already exists in multinode-297500-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_node_6ccce2fc44e3bb58d6c4f91e09ae7c7eaaf65535_27.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-297500-m03
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-297500-m03: (3.6916506s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.82s)
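The conflict is reproducible by hand: a new profile whose name collides with an existing machine name is rejected with exit code 14, and a profile that shadows a would-be node name later blocks node add with exit code 80; a minimal sketch against the cluster above:

    minikube start -p multinode-297500-m02 --driver=docker   # exit 14: duplicates the m02 machine name
    minikube start -p multinode-297500-m03 --driver=docker   # succeeds...
    minikube node add -p multinode-297500                    # ...then exit 80: the m03 name is already taken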

TestScheduledStopWindows (107.91s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-511200 --memory=3072 --driver=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-511200 --memory=3072 --driver=docker: (41.7061475s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-511200 --schedule 5m
minikube stop output:

scheduled_stop_test.go:204: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-511200 -n scheduled-stop-511200
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-511200 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-511200 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-511200 --schedule 5s: (1.0460131s)
minikube stop output:

E1228 07:18:10.177316   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-511200
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-511200: exit status 7 (220.6616ms)

-- stdout --
	scheduled-stop-511200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-511200 -n scheduled-stop-511200
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-511200 -n scheduled-stop-511200: exit status 7 (215.0183ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-511200" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-511200
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-511200: (2.4621414s)
--- PASS: TestScheduledStopWindows (107.91s)
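Scheduled stop arms a timer instead of stopping immediately; the pending stop is visible through the TimeToStop status field and the in-guest systemd unit; a minimal sketch (profile name sched-demo is illustrative):

    minikube stop -p sched-demo --schedule 5m
    minikube status --format={{.TimeToStop}} -p sched-demo
    minikube ssh -p sched-demo -- sudo systemctl show minikube-scheduled-stop --no-page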

TestInsufficientStorage (28.99s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-135100 --memory=3072 --output=json --wait=true --driver=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-135100 --memory=3072 --output=json --wait=true --driver=docker: exit status 26 (25.200692s)

-- stdout --
	{"specversion":"1.0","id":"7d2f0a93-6cda-4807-be5f-196c0f9b0beb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-135100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"52f8163f-cc32-4357-a6d2-c482184b2592","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"9ce2d1aa-7bea-4820-86eb-f18cf9c601f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3794cbb8-9119-478e-8e1a-d0a0d9b5ed20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"fed9686a-3ab1-47e8-a84e-36f6b82e58da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22352"}}
	{"specversion":"1.0","id":"66d23eb6-7a8b-46ff-af58-4c417d9a921d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f2356c5d-fec9-441f-952f-6a714caddd97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b1291472-8af8-457c-96e2-f2766e0fca39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"cdd08b2e-7010-49a0-8b6c-e6121fdc8df3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5c27cf12-81fa-4b32-9420-be844c9f5e79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"e57c5708-d5ae-49ec-9753-22b2ff236c32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-135100\" primary control-plane node in \"insufficient-storage-135100\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"147b0e9e-b9c1-480c-9d58-bed59a679a0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1766884053-22351 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1fdd2459-eec5-4ae7-bc7b-3a9b23661f93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"a512d008-d7fe-4897-b62f-01dd2d425f39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-135100 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-135100 --output=json --layout=cluster: exit status 7 (585.7307ms)

-- stdout --
	{"Name":"insufficient-storage-135100","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-135100","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1228 07:19:06.620536   12812 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-135100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-135100 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-135100 --output=json --layout=cluster: exit status 7 (575.3589ms)

-- stdout --
	{"Name":"insufficient-storage-135100","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-135100","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1228 07:19:07.198219      32 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-135100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	E1228 07:19:07.219252      32 status.go:258] unable to read event log: stat: GetFileAttributesEx C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\insufficient-storage-135100\events.json: The system cannot find the file specified.

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-135100" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-135100
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-135100: (2.6230256s)
--- PASS: TestInsufficientStorage (28.99s)
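The failure here is induced by test-only environment variables visible in the JSON events (MINIKUBE_TEST_STORAGE_CAPACITY=100, MINIKUBE_TEST_AVAILABLE_STORAGE=19), which drive start to exit code 26 (RSRC_DOCKER_STORAGE). Per the emitted advice, a genuine out-of-space condition can be relieved or bypassed; a minimal sketch (profile name demo is illustrative):

    docker system prune              # free unused Docker data (optionally with -a)
    minikube start -p demo --force   # skip the storage check, as the error message suggests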

TestRunningBinaryUpgrade (351.67s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.254432120.exe start -p running-upgrade-509300 --memory=3072 --vm-driver=docker
E1228 07:27:53.233997   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:10.185791   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.254432120.exe start -p running-upgrade-509300 --memory=3072 --vm-driver=docker: (49.0822461s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-509300 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-509300 --memory=3072 --alsologtostderr -v=1 --driver=docker: (4m57.6440382s)
helpers_test.go:176: Cleaning up "running-upgrade-509300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-509300
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-509300: (3.6073965s)
--- PASS: TestRunningBinaryUpgrade (351.67s)
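The running-upgrade flow starts a cluster with an older release binary, then re-runs start on the same profile with the binary under test, which adopts the running cluster; a minimal sketch (binary paths and profile name are illustrative; the older release uses the legacy --vm-driver flag, as in this run):

    minikube-v1.35.0.exe start -p upgrade-demo --memory=3072 --vm-driver=docker
    minikube.exe start -p upgrade-demo --memory=3072 --driver=docker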

TestKubernetesUpgrade (394.33s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-365300 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-365300 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker: (48.6324029s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-365300 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-365300 --alsologtostderr: (12.098136s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-365300 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-365300 status --format={{.Host}}: exit status 7 (217.4508ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-365300 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker
E1228 07:30:36.987088   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-365300 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker: (4m44.8115008s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-365300 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-365300 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-365300 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker: exit status 106 (262.9539ms)

-- stdout --
	* [kubernetes-upgrade-365300] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-365300
	    minikube start -p kubernetes-upgrade-365300 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3653002 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-365300 --kubernetes-version=v1.35.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-365300 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-365300 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker: (43.382235s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-365300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-365300
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-365300: (4.7939893s)
--- PASS: TestKubernetesUpgrade (394.33s)
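
Note: the upgrade path exercised above can be reproduced by hand with the same flags; a minimal sketch (profile name as in the log). The intermediate `status` call exits 7 with "Stopped" on stdout after a stop, which the test explicitly treats as acceptable:

    minikube start -p kubernetes-upgrade-365300 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker
    minikube stop -p kubernetes-upgrade-365300
    minikube status -p kubernetes-upgrade-365300 --format={{.Host}}    # exit 7, "Stopped" (may be ok)
    minikube start -p kubernetes-upgrade-365300 --memory=3072 --kubernetes-version=v1.35.0 --driver=docker
    kubectl --context kubernetes-upgrade-365300 version --output=json  # confirm the upgraded server version
    minikube delete -p kubernetes-upgrade-365300

A direct downgrade start (v1.35.0 back to v1.28.0) is refused with K8S_DOWNGRADE_UNSUPPORTED, as the stderr block above shows; the suggested recovery is to delete and recreate the profile, start a second profile at the older version, or keep using the existing cluster at v1.35.0.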

TestMissingContainerUpgrade (130.14s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.3602555782.exe start -p missing-upgrade-224300 --memory=3072 --driver=docker
E1228 07:25:36.981939   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.3602555782.exe start -p missing-upgrade-224300 --memory=3072 --driver=docker: (49.9149989s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-224300
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-224300: (2.1236993s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-224300
version_upgrade_test.go:329: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-224300 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-224300 --memory=3072 --alsologtostderr -v=1 --driver=docker: (1m13.5155598s)
helpers_test.go:176: Cleaning up "missing-upgrade-224300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-224300
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-224300: (3.6701895s)
--- PASS: TestMissingContainerUpgrade (130.14s)
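
Note: this test simulates a node container that vanished between minikube runs: a previously released binary creates the profile, the container is then removed behind minikube's back, and the binary under test must recreate it in place. The docker side of the simulation, as run above:

    docker stop missing-upgrade-224300
    docker rm missing-upgrade-224300
    out/minikube-windows-amd64.exe start -p missing-upgrade-224300 --memory=3072 --driver=docker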

TestStoppedBinaryUpgrade/Setup (0.81s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.81s)

TestPause/serial/Start (124.48s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-505000 --memory=3072 --install-addons=false --wait=all --driver=docker
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-505000 --memory=3072 --install-addons=false --wait=all --driver=docker: (2m4.4761297s)
--- PASS: TestPause/serial/Start (124.48s)

TestStoppedBinaryUpgrade/Upgrade (378.14s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.737371238.exe start -p stopped-upgrade-550200 --memory=3072 --vm-driver=docker
E1228 07:20:36.977763   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.737371238.exe start -p stopped-upgrade-550200 --memory=3072 --vm-driver=docker: (1m44.2188096s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.737371238.exe -p stopped-upgrade-550200 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.737371238.exe -p stopped-upgrade-550200 stop: (2.1067924s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-550200 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-550200 --memory=3072 --alsologtostderr -v=1 --driver=docker: (4m31.8086632s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (378.14s)
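
Note: the stopped-binary upgrade path drives two binaries against one profile: the released v1.35.0 binary (cached under the temp directory with a randomized suffix; name shortened in this sketch) creates and stops the cluster, then the binary under test must start it in place. The old binary is invoked with the legacy --vm-driver spelling, the new one with --driver:

    minikube-v1.35.0.exe start -p stopped-upgrade-550200 --memory=3072 --vm-driver=docker
    minikube-v1.35.0.exe -p stopped-upgrade-550200 stop
    out/minikube-windows-amd64.exe start -p stopped-upgrade-550200 --memory=3072 --driver=docker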

TestPause/serial/SecondStartNoReconfiguration (45.48s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-505000 --alsologtostderr -v=1 --driver=docker
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-505000 --alsologtostderr -v=1 --driver=docker: (45.4652895s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (45.48s)

TestPause/serial/Pause (1.13s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-505000 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-505000 --alsologtostderr -v=5: (1.1298828s)
--- PASS: TestPause/serial/Pause (1.13s)

TestPause/serial/VerifyStatus (0.64s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-505000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-505000 --output=json --layout=cluster: exit status 2 (637.6204ms)

-- stdout --
	{"Name":"pause-505000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-505000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.64s)
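
Note: the cluster-layout status encodes state as HTTP-style codes - 200 (OK), 405 (Stopped), 418 (Paused) - and the command exits non-zero (2 here) while the cluster is paused, so callers must not read a non-zero exit as a hard failure. A sketch of pulling fields out of that JSON (assumes PowerShell; field names as in the output above):

    $st = out/minikube-windows-amd64.exe status -p pause-505000 --output=json --layout=cluster | ConvertFrom-Json
    $st.StatusName                                # "Paused" (418)
    $st.Nodes[0].Components.kubelet.StatusName    # "Stopped" (405) while paused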

TestPause/serial/Unpause (0.86s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-505000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.86s)

TestPause/serial/PauseAgain (1.4s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-505000 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-505000 --alsologtostderr -v=5: (1.403192s)
--- PASS: TestPause/serial/PauseAgain (1.40s)

TestPause/serial/DeletePaused (4.18s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-505000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-505000 --alsologtostderr -v=5: (4.1777741s)
--- PASS: TestPause/serial/DeletePaused (4.18s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.22s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-986100 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-986100 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker: exit status 14 (215.183ms)

-- stdout --
	* [NoKubernetes-986100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.22s)
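
Note: --no-kubernetes and an explicit --kubernetes-version are mutually exclusive (usage error, exit 14). If a version is pinned in the global config it has to be cleared before a Kubernetes-free start will succeed, exactly as the suggestion in the stderr block says:

    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-986100 --no-kubernetes --memory=3072 --driver=docker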

TestNoKubernetes/serial/StartWithK8s (47.95s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-986100 --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:120: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-986100 --memory=3072 --alsologtostderr -v=5 --driver=docker: (47.3128236s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-986100 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (47.95s)

TestPause/serial/VerifyDeletedResources (1.28s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.1137719s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-505000
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-505000: exit status 1 (48.9959ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-505000: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (1.28s)
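
Note: the post-delete verification checks that every Docker artifact named after the profile is gone; in particular, `docker volume inspect` on a deleted volume prints [] and exits 1, which is the passing condition here:

    docker ps -a                          # node container should be absent
    docker volume inspect pause-505000    # exit 1: "no such volume"
    docker network ls                     # per-profile network should be absent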

TestNoKubernetes/serial/StartWithStopK8s (20.53s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-986100 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-986100 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker: (17.1800301s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-986100 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-986100 status -o json: exit status 2 (567.962ms)

-- stdout --
	{"Name":"NoKubernetes-986100","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-986100
no_kubernetes_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-986100: (2.7760064s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (20.53s)

TestNoKubernetes/serial/Start (13.99s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-986100 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:161: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-986100 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker: (13.9871839s)
--- PASS: TestNoKubernetes/serial/Start (13.99s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.54s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-986100 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-986100 "sudo systemctl is-active --quiet service kubelet": exit status 1 (537.8131ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.54s)
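
Note: the "exit status 3" surfaced through minikube ssh is systemd's code for an inactive unit; `systemctl is-active --quiet` prints nothing and signals state purely through its exit code, so the non-zero exit is exactly what this test wants:

    out/minikube-windows-amd64.exe ssh -p NoKubernetes-986100 "sudo systemctl is-active --quiet service kubelet"
    # non-zero exit => kubelet is not running inside the node container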

TestNoKubernetes/serial/ProfileList (3.8s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-windows-amd64.exe profile list: (1.8624885s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (1.9349407s)
--- PASS: TestNoKubernetes/serial/ProfileList (3.80s)

TestNoKubernetes/serial/Stop (1.87s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-986100
no_kubernetes_test.go:183: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-986100: (1.8652845s)
--- PASS: TestNoKubernetes/serial/Stop (1.87s)

TestNoKubernetes/serial/StartNoArgs (9.64s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-986100 --driver=docker
E1228 07:23:40.030051   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-986100 --driver=docker: (9.6397313s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.64s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.53s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-986100 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-986100 "sudo systemctl is-active --quiet service kubelet": exit status 1 (525.2858ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.53s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.37s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-550200
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-550200: (1.3733498s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.37s)

TestPreload/Start-NoPreload-PullImage (116.65s)

=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-362600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker
preload_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-362600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker: (1m42.5779439s)
preload_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-362600 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-362600 image pull ghcr.io/medyagh/image-mirrors/busybox:latest: (2.0259104s)
preload_test.go:62: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-362600
preload_test.go:62: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-362600: (12.0470985s)
--- PASS: TestPreload/Start-NoPreload-PullImage (116.65s)

TestPreload/Restart-With-Preload-Check-User-Image (49.05s)

=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-362600 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-362600 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker: (48.5762025s)
preload_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-362600 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (49.05s)
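
Note: the two preload tests pair up: the cluster is first built with --preload=false and a busybox image is pulled into it manually; after a stop, a restart with --preload=true must not clobber that cached image, and the closing image list is expected to still show it. Condensed from the two logs above:

    minikube start -p test-preload-362600 --memory=3072 --preload=false --driver=docker
    minikube -p test-preload-362600 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
    minikube stop -p test-preload-362600
    minikube start -p test-preload-362600 --preload=true --driver=docker
    minikube -p test-preload-362600 image list    # manually pulled busybox should survive the restart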

TestNetworkPlugins/group/auto/Start (89.31s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-410600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-410600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker: (1m29.3052794s)
--- PASS: TestNetworkPlugins/group/auto/Start (89.31s)

TestNetworkPlugins/group/auto/KubeletFlags (0.56s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-410600 "pgrep -a kubelet"
I1228 07:30:51.867168   13556 config.go:182] Loaded profile config "auto-410600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.56s)

TestNetworkPlugins/group/auto/NetCatPod (15.52s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-410600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-nnrsp" [f94a87d1-ac37-4211-ac84-b2059a505515] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-nnrsp" [f94a87d1-ac37-4211-ac84-b2059a505515] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 15.006219s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (15.52s)

TestNetworkPlugins/group/auto/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-410600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-410600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-410600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
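
Note: every network-plugin group below repeats the same probe battery against its own profile: deploy the netcat pod, resolve kubernetes.default (DNS), connect to localhost:8080 inside the pod, then hairpin back to the pod through its own service. Spelled out for the auto group just finished:

    kubectl --context auto-410600 replace --force -f testdata\netcat-deployment.yaml
    kubectl --context auto-410600 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-410600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-410600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"    # hairpin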

TestNetworkPlugins/group/custom-flannel/Start (59.43s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-410600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-410600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker: (59.4344589s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.43s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.54s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-410600 "pgrep -a kubelet"
I1228 07:32:38.969545   13556 config.go:182] Loaded profile config "custom-flannel-410600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.54s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (15.48s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-410600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-qf2ds" [e8f146ac-81d8-4fd7-8f76-ec9c2cb5d0f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-qf2ds" [e8f146ac-81d8-4fd7-8f76-ec9c2cb5d0f3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 15.006654s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (15.48s)

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-410600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-410600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-410600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

TestNetworkPlugins/group/calico/Start (121.62s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-410600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-410600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker: (2m1.6190426s)
--- PASS: TestNetworkPlugins/group/calico/Start (121.62s)

TestNetworkPlugins/group/enable-default-cni/Start (90.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-410600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-410600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker: (1m30.1199322s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (90.12s)

TestNetworkPlugins/group/flannel/Start (91.82s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-410600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-410600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker: (1m31.818705s)
--- PASS: TestNetworkPlugins/group/flannel/Start (91.82s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.67s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-410600 "pgrep -a kubelet"
I1228 07:35:01.048510   13556 config.go:182] Loaded profile config "enable-default-cni-410600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.67s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.62s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-410600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-2rmmd" [45f499c5-311f-446c-a5f2-e316cff9df20] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-2rmmd" [45f499c5-311f-446c-a5f2-e316cff9df20] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 15.0070036s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.62s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-72fhr" [ef15b9eb-1043-47e9-b225-dae52229b320] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0062359s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
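
Note: plugins that ship their own controller daemonset (flannel, calico, kindnet) get an extra readiness gate before the netcat probes. A rough manual equivalent of this check, assuming the label and namespace shown in the log:

    kubectl --context flannel-410600 -n kube-flannel get pods -l app=flannel
    kubectl --context flannel-410600 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m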

TestNetworkPlugins/group/false/Start (95.14s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-410600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-410600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker: (1m35.1386327s)
--- PASS: TestNetworkPlugins/group/false/Start (95.14s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.58s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-410600 "pgrep -a kubelet"
I1228 07:35:13.854850   13556 config.go:182] Loaded profile config "flannel-410600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.58s)

TestNetworkPlugins/group/flannel/NetCatPod (24.47s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-410600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context flannel-410600 replace --force -f testdata\netcat-deployment.yaml: (1.0150026s)
I1228 07:35:15.070406   13556 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1228 07:35:15.115317   13556 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-fx7sp" [23ddea1e-0380-41da-ab18-0239635173a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-fx7sp" [23ddea1e-0380-41da-ab18-0239635173a8] Running
E1228 07:35:36.991704   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 23.00959s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (24.47s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-410600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-410600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-410600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-5hfmr" [0109fe71-0653-4036-b446-0b04697a7e60] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.0064086s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.57s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-410600 "pgrep -a kubelet"
I1228 07:35:37.798099   13556 config.go:182] Loaded profile config "calico-410600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.57s)

TestNetworkPlugins/group/calico/NetCatPod (15.53s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-410600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-fpb2g" [99363b10-710d-4901-93ce-814a640f38ad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-fpb2g" [99363b10-710d-4901-93ce-814a640f38ad] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 15.0054924s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.53s)

TestNetworkPlugins/group/flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-410600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

TestNetworkPlugins/group/flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-410600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.24s)

TestNetworkPlugins/group/flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-410600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

TestNetworkPlugins/group/calico/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-410600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.30s)

TestNetworkPlugins/group/calico/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-410600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1228 07:35:53.649497   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.35s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-410600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.35s)

TestNetworkPlugins/group/bridge/Start (80.75s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-410600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker
E1228 07:35:54.930454   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:35:57.491210   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:36:02.612397   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-410600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker: (1m20.7517586s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.75s)

TestNetworkPlugins/group/kindnet/Start (79.79s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-410600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-410600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker: (1m19.7892606s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (79.79s)

TestNetworkPlugins/group/kubenet/Start (84.8s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-410600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-410600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker: (1m24.7967646s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (84.80s)

TestNetworkPlugins/group/false/KubeletFlags (0.61s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-410600 "pgrep -a kubelet"
I1228 07:36:48.069494   13556 config.go:182] Loaded profile config "false-410600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.61s)

TestNetworkPlugins/group/false/NetCatPod (14.69s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-410600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-z2c2f" [8c72a8eb-5e8d-4c7d-b6cd-2ef549756243] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-z2c2f" [8c72a8eb-5e8d-4c7d-b6cd-2ef549756243] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 14.0198775s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (14.69s)

TestNetworkPlugins/group/false/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-410600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.27s)

TestNetworkPlugins/group/false/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-410600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.21s)

TestNetworkPlugins/group/false/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-410600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.21s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.69s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-410600 "pgrep -a kubelet"
I1228 07:37:15.784815   13556 config.go:182] Loaded profile config "bridge-410600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.69s)

TestNetworkPlugins/group/bridge/NetCatPod (14.55s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-410600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-mbd7d" [4a926c0e-fbec-4026-9779-3d1a43e9bebc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-mbd7d" [4a926c0e-fbec-4026-9779-3d1a43e9bebc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.0064356s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (14.55s)

TestNetworkPlugins/group/bridge/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-410600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.25s)

TestNetworkPlugins/group/bridge/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-410600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.21s)

TestNetworkPlugins/group/bridge/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-410600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-c9kdr" [394ebdc8-ff8b-486b-8522-07a06baf6b2c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0052416s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestStartStop/group/old-k8s-version/serial/FirstStart (101.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-038500 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0
E1228 07:37:39.431395   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:39.437380   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:39.448372   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:39.469372   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:39.510375   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:39.590758   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:39.751160   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:40.072347   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:40.712875   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:41.993772   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-038500 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0: (1m41.1362164s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (101.14s)
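This profile pins --kubernetes-version=v1.28.0 to exercise a legacy release line; the --kvm-network and --kvm-qemu-uri flags come from the shared test matrix and should be inert under --driver=docker. A trimmed sketch of the same start (hypothetical profile name; minikube below stands for the out/minikube-windows-amd64.exe binary exercised by this report):

    # Start a legacy-Kubernetes cluster on the docker driver.
    minikube start -p old-k8s-demo --memory=3072 --wait=true --driver=docker --kubernetes-version=v1.28.0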

TestNetworkPlugins/group/kindnet/KubeletFlags (0.54s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-410600 "pgrep -a kubelet"
I1228 07:37:43.901012   13556 config.go:182] Loaded profile config "kindnet-410600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.54s)
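pgrep -a prints each matching PID together with its full command line, so this check surfaces every flag kubelet was started with inside the node container:

    # Inspect kubelet's effective flags on the node.
    minikube ssh -p kindnet-410600 "pgrep -a kubelet"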

TestNetworkPlugins/group/kindnet/NetCatPod (26.48s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-410600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-4lcg5" [76b5a3db-fed7-4d77-84f8-bd93322dcae7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1228 07:37:44.554830   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:49.675801   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-4lcg5" [76b5a3db-fed7-4d77-84f8-bd93322dcae7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 26.0072216s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (26.48s)
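kubectl replace --force deletes any existing netcat deployment and recreates it from the manifest, so each plugin is probed with a fresh pod; the harness then waits for app=netcat to turn healthy. By hand, with kubectl wait standing in for the poll:

    # Recreate the probe deployment from scratch, then wait for its pod.
    kubectl --context kindnet-410600 replace --force -f testdata\netcat-deployment.yaml
    kubectl --context kindnet-410600 wait --for=condition=Ready pod -l app=netcat --timeout=15m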

TestNetworkPlugins/group/kubenet/KubeletFlags (0.66s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-410600 "pgrep -a kubelet"
I1228 07:37:59.124315   13556 config.go:182] Loaded profile config "kubenet-410600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.66s)

TestNetworkPlugins/group/kubenet/NetCatPod (17.63s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-410600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-nctzn" [cda6a518-cb9e-4ca2-ad3a-959c6991f60c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1228 07:37:59.916479   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-nctzn" [cda6a518-cb9e-4ca2-ad3a-959c6991f60c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 17.0064999s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (17.63s)

TestStartStop/group/embed-certs/serial/FirstStart (92.55s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-252400 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.35.0
E1228 07:38:10.195443   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-045400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-252400 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.35.0: (1m32.5534195s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (92.55s)

TestNetworkPlugins/group/kindnet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-410600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-410600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

TestNetworkPlugins/group/kindnet/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-410600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

TestNetworkPlugins/group/kubenet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-410600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.25s)

TestNetworkPlugins/group/kubenet/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-410600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.22s)

TestNetworkPlugins/group/kubenet/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-410600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.23s)

TestStartStop/group/no-preload/serial/FirstStart (100.87s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-030100 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-030100 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0: (1m40.8680637s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (100.87s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-736600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.35.0
E1228 07:39:01.358786   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-736600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.35.0: (1m29.6205898s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.62s)

TestStartStop/group/old-k8s-version/serial/DeployApp (12.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-038500 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0d5e1c17-5105-4734-ba9a-d13b184925d2] Pending
helpers_test.go:353: "busybox" [0d5e1c17-5105-4734-ba9a-d13b184925d2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0d5e1c17-5105-4734-ba9a-d13b184925d2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.0056087s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-038500 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.73s)
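DeployApp creates a busybox pod, waits for it to run, and execs ulimit -n as an end-to-end smoke test of kubectl exec. Condensed, with kubectl wait standing in for the harness's label-based poll:

    # Create the test pod, wait for it, then read its open-file limit.
    kubectl --context old-k8s-version-038500 create -f testdata\busybox.yaml
    kubectl --context old-k8s-version-038500 wait --for=condition=Ready pod busybox --timeout=8m
    kubectl --context old-k8s-version-038500 exec busybox -- /bin/sh -c "ulimit -n"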

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-038500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-038500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.594395s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-038500 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.80s)
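--images and --registries override an addon's image and registry per component; MetricsServer is deliberately pointed at the unreachable fake.domain so the subsequent describe can verify the override is rendered into the deployment (the pod is not expected to pull successfully):

    # Enable the addon with an overridden image source, then check the rendered spec.
    minikube addons enable metrics-server -p old-k8s-version-038500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context old-k8s-version-038500 describe deploy/metrics-server -n kube-system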

TestStartStop/group/old-k8s-version/serial/Stop (12.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-038500 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-038500 --alsologtostderr -v=3: (12.5673462s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.57s)

TestStartStop/group/embed-certs/serial/DeployApp (9.67s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-252400 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [5df21a09-4a90-4d55-8722-f4c678f5d69a] Pending
helpers_test.go:353: "busybox" [5df21a09-4a90-4d55-8722-f4c678f5d69a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [5df21a09-4a90-4d55-8722-f4c678f5d69a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.0072652s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-252400 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.67s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-038500 -n old-k8s-version-038500
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-038500 -n old-k8s-version-038500: exit status 7 (226.1717ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-038500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.55s)
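minikube status accepts a Go template over its status fields; on a stopped profile it exits non-zero (7 in this run, with "Stopped" on stdout), which the test explicitly tolerates before enabling the dashboard addon offline:

    # Exit code 7 with output "Stopped" is expected for a stopped profile.
    minikube status --format={{.Host}} -p old-k8s-version-038500 -n old-k8s-version-038500
    minikube addons enable dashboard -p old-k8s-version-038500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4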

TestStartStop/group/old-k8s-version/serial/SecondStart (54.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-038500 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-038500 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0: (53.857081s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-038500 -n old-k8s-version-038500
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (54.63s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.84s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-252400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-252400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.611405s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-252400 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.84s)

TestStartStop/group/embed-certs/serial/Stop (12.33s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-252400 --alsologtostderr -v=3
E1228 07:40:01.648508   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:01.654554   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:01.665415   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:01.685958   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:01.726808   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:01.808039   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:01.969072   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-252400 --alsologtostderr -v=3: (12.3344831s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.33s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.56s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-252400 -n embed-certs-252400
E1228 07:40:02.290258   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-252400 -n embed-certs-252400: exit status 7 (235.6068ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-252400 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.56s)

TestStartStop/group/embed-certs/serial/SecondStart (56.96s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-252400 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.35.0
E1228 07:40:02.930908   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:04.211529   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:06.771918   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:07.275437   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:07.281172   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:07.291708   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:07.311964   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:07.352164   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:07.432736   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:07.593690   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:07.914215   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:08.555345   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:09.835891   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:11.892506   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:12.396265   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:17.516952   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:20.047247   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:22.133224   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:23.280484   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-252400 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.35.0: (56.2707769s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-252400 -n embed-certs-252400
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (56.96s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-736600 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a4157927-f52b-42ea-b355-80653e33e1ce] Pending
helpers_test.go:353: "busybox" [a4157927-f52b-42ea-b355-80653e33e1ce] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1228 07:40:27.757636   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [a4157927-f52b-42ea-b355-80653e33e1ce] Running
E1228 07:40:31.227632   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:31.233650   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:31.244631   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:31.265630   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:31.306634   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:31.387626   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:31.548884   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:31.869877   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:32.510895   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.0075645s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-736600 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.78s)

TestStartStop/group/no-preload/serial/DeployApp (11.6s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-030100 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [f15c663c-8a9a-4cc7-8607-fecef41f0235] Pending
helpers_test.go:353: "busybox" [f15c663c-8a9a-4cc7-8607-fecef41f0235] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [f15c663c-8a9a-4cc7-8607-fecef41f0235] Running
E1228 07:40:33.791708   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.0075094s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-030100 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.60s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-736600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1228 07:40:36.352210   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-736600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.3884234s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-736600 describe deploy/metrics-server -n kube-system
E1228 07:40:36.996813   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-561400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.61s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-736600 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-736600 --alsologtostderr -v=3: (13.1948869s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.20s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.76s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-030100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-030100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.5267975s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-030100 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.76s)

TestStartStop/group/no-preload/serial/Stop (12.26s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-030100 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-030100 --alsologtostderr -v=3: (12.2572111s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.26s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-7rfcw" [bd29f0b3-b652-405d-9ea4-a147ba792006] Running
E1228 07:40:41.472627   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:42.613952   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0185785s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-7rfcw" [bd29f0b3-b652-405d-9ea4-a147ba792006] Running
E1228 07:40:48.239087   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0067737s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-038500 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.30s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-736600 -n default-k8s-diff-port-736600
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-736600 -n default-k8s-diff-port-736600: exit status 7 (217.824ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-736600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.52s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (57.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-736600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.35.0
E1228 07:40:51.713882   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:40:52.371890   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-736600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.35.0: (57.3245168s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-736600 -n default-k8s-diff-port-736600
E1228 07:41:48.721924   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:41:48.727928   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:41:48.738930   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (57.99s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.54s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-030100 -n no-preload-030100
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-030100 -n no-preload-030100: exit status 7 (226.8603ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-030100 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.54s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-038500 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.46s)
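image list --format=json emits the node's cached images as JSON so the test can scan for anything outside the expected Kubernetes image set; here it flags the busybox image deployed earlier. Manually:

    # Dump all images in the profile's runtime for scripted inspection.
    minikube image list --format=json -p old-k8s-version-038500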

TestStartStop/group/no-preload/serial/SecondStart (63.23s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-030100 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-030100 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0: (1m2.5920947s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-030100 -n no-preload-030100
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (63.23s)

TestStartStop/group/old-k8s-version/serial/Pause (7.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-038500 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-038500 --alsologtostderr -v=1: (1.7805547s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-038500 -n old-k8s-version-038500
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-038500 -n old-k8s-version-038500: exit status 2 (2.0717023s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-038500 -n old-k8s-version-038500
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-038500 -n old-k8s-version-038500: exit status 2 (871.4931ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-038500 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-038500 --alsologtostderr -v=1: (1.0887183s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-038500 -n old-k8s-version-038500
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-038500 -n old-k8s-version-038500
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (7.45s)
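The pause cycle asserts the full status transition: after pause, {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped with status exiting 2 (tolerated, like exit 7 above); after unpause both status calls succeed. Condensed:

    # Freeze the control plane, observe Paused/Stopped (exit code 2), then resume.
    minikube pause -p old-k8s-version-038500
    minikube status --format={{.APIServer}} -p old-k8s-version-038500
    minikube unpause -p old-k8s-version-038500
    minikube status --format={{.APIServer}} -p old-k8s-version-038500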

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-wc8kh" [53aaa59f-aad1-46df-991d-8685c3dc3dc0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0069274s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.36s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-wc8kh" [53aaa59f-aad1-46df-991d-8685c3dc3dc0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0177793s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-252400 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.36s)

TestStartStop/group/newest-cni/serial/FirstStart (53.15s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-742900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-742900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0: (53.1523135s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (53.15s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.53s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-252400 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.53s)

TestStartStop/group/embed-certs/serial/Pause (8.57s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-252400 --alsologtostderr -v=1
E1228 07:41:12.194515   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-252400 --alsologtostderr -v=1: (1.29471s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-252400 -n embed-certs-252400
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-252400 -n embed-certs-252400: exit status 2 (702.232ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-252400 -n embed-certs-252400
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-252400 -n embed-certs-252400: exit status 2 (661.6054ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-252400 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-252400 --alsologtostderr -v=1: (2.0210389s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-252400 -n embed-certs-252400
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-252400 -n embed-certs-252400: (1.3891885s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-252400 -n embed-certs-252400
E1228 07:41:20.061358   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-252400 -n embed-certs-252400: (2.4986584s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (8.57s)
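
Note: the pause/unpause cycle this test drives can be reproduced by hand. A minimal sketch, using the profile name from the log above ("minikube" stands in for the out/minikube-windows-amd64.exe binary used in this run; exit status 2 from "status" is expected while components are paused):

    minikube pause -p embed-certs-252400
    minikube status --format={{.APIServer}} -p embed-certs-252400    # reports "Paused", exit status 2
    minikube status --format={{.Kubelet}} -p embed-certs-252400      # reports "Stopped", exit status 2
    minikube unpause -p embed-certs-252400
    minikube status --format={{.APIServer}} -p embed-certs-252400    # exit status 0 once resumed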

TestPreload/PreloadSrc/gcs (6.94s)

=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-dl-gcs-720100 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker
E1228 07:41:29.199924   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-dl-gcs-720100 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker: (6.2082666s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-720100" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-dl-gcs-720100
--- PASS: TestPreload/PreloadSrc/gcs (6.94s)

TestPreload/PreloadSrc/github (8.72s)

=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-dl-github-855500 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker
preload_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-dl-github-855500 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker: (8.055205s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-855500" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-dl-github-855500
--- PASS: TestPreload/PreloadSrc/github (8.72s)

TestPreload/PreloadSrc/gcs-cached (1.79s)

=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-dl-gcs-cached-210500 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker
preload_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-dl-gcs-cached-210500 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker: (1.1713678s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-210500" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-dl-gcs-cached-210500
--- PASS: TestPreload/PreloadSrc/gcs-cached (1.79s)
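
Note: the three PreloadSrc subtests above exercise the --preload-source flag end to end. A condensed sketch of the equivalent manual invocations (the profile name is illustrative; the ~1.2s "gcs-cached" run versus the ~6.2s cold "gcs" run shows the second fetch of the v1.34.0-rc.2 preload being served from the local cache):

    minikube start -p preload-dl --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --driver=docker
    minikube delete -p preload-dl
    minikube start -p preload-dl --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --driver=docker
    minikube delete -p preload-dl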

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-dfrfv" [b0f5ea5b-4190-4cc0-8b59-e6b64ddad81e] Running
E1228 07:41:48.759932   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:41:48.800531   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:41:48.880868   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:41:49.041453   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:41:49.361931   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:41:50.002632   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:41:51.283384   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:41:53.156308   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:41:53.845004   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0063076s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-dfrfv" [b0f5ea5b-4190-4cc0-8b59-e6b64ddad81e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0074719s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-736600 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.31s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-s4gvz" [c8af4d8e-29aa-42c8-9c6e-f13230c857ae] Running
E1228 07:41:58.965685   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0188605s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.02s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.46s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-742900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-742900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.4616749s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.46s)
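
Note: the addons invocation above shows the per-addon image/registry override syntax. A minimal sketch with the values copied from the test (fake.domain is not a real registry, so the overridden image cannot actually pull; the log shows only that enabling with the overrides succeeds):

    minikube addons enable metrics-server -p newest-cni-742900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain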

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-736600 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.53s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (5.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-736600 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-736600 --alsologtostderr -v=1: (1.1035426s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-736600 -n default-k8s-diff-port-736600
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-736600 -n default-k8s-diff-port-736600: exit status 2 (650.2619ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-736600 -n default-k8s-diff-port-736600
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-736600 -n default-k8s-diff-port-736600: exit status 2 (651.4979ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-736600 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-736600 --alsologtostderr -v=1: (1.1067465s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-736600 -n default-k8s-diff-port-736600
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-736600 -n default-k8s-diff-port-736600: (1.1131418s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-736600 -n default-k8s-diff-port-736600
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.25s)

TestStartStop/group/newest-cni/serial/Stop (12.53s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-742900 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-742900 --alsologtostderr -v=3: (12.532208s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.53s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.28s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-s4gvz" [c8af4d8e-29aa-42c8-9c6e-f13230c857ae] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0070426s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-030100 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.28s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.64s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-030100 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.64s)

TestStartStop/group/no-preload/serial/Pause (5.42s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-030100 --alsologtostderr -v=1
E1228 07:42:09.206272   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-030100 --alsologtostderr -v=1: (1.6533674s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-030100 -n no-preload-030100
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-030100 -n no-preload-030100: exit status 2 (613.3762ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-030100 -n no-preload-030100
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-030100 -n no-preload-030100: exit status 2 (618.934ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-030100 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-030100 -n no-preload-030100
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-030100 -n no-preload-030100
--- PASS: TestStartStop/group/no-preload/serial/Pause (5.42s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.51s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-742900 -n newest-cni-742900
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-742900 -n newest-cni-742900: exit status 7 (201.9993ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-742900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.51s)

TestStartStop/group/newest-cni/serial/SecondStart (21.92s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-742900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0
E1228 07:42:16.321449   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:42:16.327460   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:42:16.338450   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:42:16.359056   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:42:16.400051   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:42:16.480695   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:42:16.641699   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:42:16.962420   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:42:17.603119   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:42:18.883585   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-742900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0: (20.8474336s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-742900 -n newest-cni-742900
E1228 07:42:36.804643   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-742900 -n newest-cni-742900: (1.0704734s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (21.92s)
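
Note: FirstStart and SecondStart issue the same start command; SecondStart simply re-runs it against the stopped profile. The invocation, minus the Windows binary path:

    minikube start -p newest-cni-742900 --memory=3072 --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0

The narrowed --wait list matters here: as the warnings in the next subtests note, cni mode needs additional setup before pods can schedule, so a full pod-readiness wait would not be meaningful.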

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.81s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-742900 image list --format=json
E1228 07:42:37.359410   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:42:37.365317   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:42:37.375870   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:42:37.396420   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:42:37.437478   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:42:37.518164   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:42:37.679180   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.81s)

TestStartStop/group/newest-cni/serial/Pause (5.12s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-742900 --alsologtostderr -v=1
E1228 07:42:37.999975   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:42:38.640476   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-742900 --alsologtostderr -v=1: (1.3293537s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-742900 -n newest-cni-742900
E1228 07:42:39.435393   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-742900 -n newest-cni-742900: exit status 2 (618.0934ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-742900 -n newest-cni-742900
E1228 07:42:39.921147   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-742900 -n newest-cni-742900: exit status 2 (639.5958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-742900 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-742900 -n newest-cni-742900
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-742900 -n newest-cni-742900
E1228 07:42:42.482091   13556 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-410600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/Pause (5.12s)

Test skip (27/349)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.35.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Registry (27.64s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 9.4491ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-fmkxm" [c962fca7-5e44-4757-9e0f-7154e33494ab] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.0041703s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-2kgkh" [d1ec2b7d-8802-48c3-9a35-6db687a13322] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.0062515s
addons_test.go:394: (dbg) Run:  kubectl --context addons-045400 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-045400 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-045400 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (14.1506717s)
addons_test.go:409: Unable to complete rest of the test due to connectivity assumptions
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-045400 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-045400 addons disable registry --alsologtostderr -v=1: (1.2661694s)
--- SKIP: TestAddons/parallel/Registry (27.64s)
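
Note: the step that precedes this SKIP is a plain in-cluster HTTP probe of the registry Service (copied from the log):

    kubectl --context addons-045400 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

The probe itself completes; the remainder of the test assumes direct host-to-node connectivity, which this Docker-on-Windows configuration does not provide, hence "Unable to complete rest of the test due to connectivity assumptions".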

TestAddons/parallel/Ingress (26.09s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-045400 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-045400 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-045400 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [ec5ab8d9-9420-4a82-adce-ef91a00a0d31] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [ec5ab8d9-9420-4a82-adce-ef91a00a0d31] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.0124457s
I1228 06:34:37.518178   13556 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-045400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: skipping ingress DNS test for any combination that needs port forwarding
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-045400 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-045400 addons disable ingress-dns --alsologtostderr -v=1: (2.2164842s)
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-045400 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-045400 addons disable ingress --alsologtostderr -v=1: (9.0297213s)
--- SKIP: TestAddons/parallel/Ingress (26.09s)
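
Note: the route check that runs before the skipped DNS step is an in-node curl against the ingress controller (copied from the log), which sidesteps host networking entirely:

    minikube ssh -p addons-045400 "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"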

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 0 -p functional-561400 --alsologtostderr -v=1]
functional_test.go:931: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 0 -p functional-561400 --alsologtostderr -v=1] ...
helpers_test.go:520: unable to terminate pid 10548: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:65: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/ServiceCmdConnect (8.33s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-561400 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-561400 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-g46pp" [a520d2d2-7cf8-4cb6-aa2e-acbf968dd272] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-g46pp" [a520d2d2-7cf8-4cb6-aa2e-acbf968dd272] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.0363217s
functional_test.go:1656: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (8.33s)
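
Note: the test reaches a healthy pod and then skips before the connectivity check, which is the part broken for port-forwarded drivers (issue #7383). A sketch of the manual flow up to that point, plus the tunnel-backed URL lookup one would use on this driver (the service subcommand is an illustration, not part of the test):

    kubectl --context functional-561400 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
    kubectl --context functional-561400 expose deployment hello-node-connect --type=NodePort --port=8080
    minikube service hello-node-connect -p functional-561400 --url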

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/cilium (9.13s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-410600 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-410600

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-410600

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-410600

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-410600

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-410600

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-410600

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-410600

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-410600

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-410600

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-410600

>>> host: /etc/nsswitch.conf:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> host: /etc/hosts:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> host: /etc/resolv.conf:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-410600

>>> host: crictl pods:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> host: crictl containers:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> k8s: describe netcat deployment:
error: context "cilium-410600" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-410600" does not exist

>>> k8s: netcat logs:
error: context "cilium-410600" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-410600" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-410600" does not exist

>>> k8s: coredns logs:
error: context "cilium-410600" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-410600" does not exist

>>> k8s: api server logs:
error: context "cilium-410600" does not exist

>>> host: /etc/cni:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> host: ip a s:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> host: ip r s:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> host: iptables-save:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> host: iptables table nat:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-410600

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-410600

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-410600" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-410600" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-410600

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-410600

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-410600" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-410600" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-410600" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-410600" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-410600" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt
    extensions:
    - extension:
        last-update: Sun, 28 Dec 2025 07:22:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://127.0.0.1:55211
  name: cert-expiration-709700
- cluster:
    certificate-authority: C:/Users/jenkins.minikube4/minikube-integration/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 28 Dec 2025 07:21:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://127.0.0.1:55041
  name: stopped-upgrade-550200
contexts:
- context:
    cluster: cert-expiration-709700
    extensions:
    - extension:
        last-update: Sun, 28 Dec 2025 07:22:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-709700
  name: cert-expiration-709700
- context:
    cluster: stopped-upgrade-550200
    user: stopped-upgrade-550200
  name: stopped-upgrade-550200
current-context: ""
kind: Config
users:
- name: cert-expiration-709700
  user:
    client-certificate: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-709700\client.crt
    client-key: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-709700\client.key
- name: stopped-upgrade-550200
  user:
    client-certificate: C:/Users/jenkins.minikube4/minikube-integration/.minikube/profiles/stopped-upgrade-550200/client.crt
    client-key: C:/Users/jenkins.minikube4/minikube-integration/.minikube/profiles/stopped-upgrade-550200/client.key
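
The config dump above is the root cause of every kubectl failure in this debug log: current-context is empty and no cilium-410600 context exists, because that profile was never started before these probes ran. A minimal Go sketch of the same check, using k8s.io/client-go (the kubeconfig path and profile name are taken from this run; the snippet is illustrative, not minikube's actual collector code):

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path observed in this run's environment; adjust for your machine.
	kubeconfig := `C:\Users\jenkins.minikube4\minikube-integration\kubeconfig`

	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		log.Fatalf("loading kubeconfig: %v", err)
	}

	// The debug collector asks for the profile's context by name; when the
	// profile was never started (or was already deleted), the lookup fails
	// the same way the kubectl probes above do.
	const name = "cilium-410600"
	if _, ok := cfg.Contexts[name]; !ok {
		fmt.Printf("context %q not found (current-context=%q)\n", name, cfg.CurrentContext)
		return
	}
	fmt.Printf("context %q present\n", name)
}
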
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-410600

>>> host: docker daemon status:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> host: docker daemon config:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> host: docker system info:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> host: cri-docker daemon status:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> host: cri-docker daemon config:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> host: cri-dockerd version:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> host: containerd daemon status:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> host: containerd daemon config:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> host: containerd config dump:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> host: crio daemon status:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> host: crio daemon config:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> host: /etc/crio:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

>>> host: crio config:
* Profile "cilium-410600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-410600"

----------------------- debugLogs end: cilium-410600 [took: 8.6688353s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-410600" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-410600
--- SKIP: TestNetworkPlugins/group/cilium (9.13s)
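
Two failure shapes repeat through the debugLogs dump above: host-level probes are routed through minikube and fail with the "* Profile \"cilium-410600\" not found" message, while Kubernetes-level probes are routed through kubectl and fail with one of the missing-context errors. A hedged Go sketch of that probe pattern (the minikube and kubectl invocations are real CLI usage, but the harness function below is illustrative, not the actual helpers_test.go code):

package main

import (
	"fmt"
	"os/exec"
)

// probe runs one debug-log command and returns its combined output;
// with no "cilium-410600" profile present, each command fails fast
// with one of the two messages seen above.
func probe(name string, args ...string) string {
	out, _ := exec.Command(name, args...).CombinedOutput()
	return string(out)
}

func main() {
	// Host probe: goes through `minikube ssh`, so a missing profile
	// yields the "* Profile ... not found" message.
	fmt.Print(probe("minikube", "-p", "cilium-410600", "ssh", "cat /etc/hosts"))

	// Kubernetes probe: goes through kubectl with --context, so the
	// same missing profile yields a missing-context error instead.
	fmt.Print(probe("kubectl", "--context", "cilium-410600", "get", "pods", "-A"))
}
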
x
+
TestStartStop/group/disable-driver-mounts (0.55s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-487400" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-487400
--- SKIP: TestStartStop/group/disable-driver-mounts (0.55s)
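
For reference, this SKIP comes from a driver gate at start_stop_delete_test.go:101: the test exercises VirtualBox-specific host-folder mounts, so the docker driver used in this run skips it immediately. A minimal sketch of that kind of guard (the helper below is illustrative, not the test's actual code):

package integration

import "testing"

// maybeSkipDriverMounts skips a driver-mount test on any driver other
// than virtualbox, producing a SKIP record like the one above.
func maybeSkipDriverMounts(t *testing.T, driver string) {
	if driver != "virtualbox" {
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
}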