Test Report: Docker_Windows 21139

c4345f2baa4ca80c4898fac9368be2207cfcb3f0:2025-11-09:42265

Failed tests (2/345)

Order  Failed test                  Duration (s)
58     TestErrorSpam/setup          50.63
261    TestMissingContainerUpgrade  1098.54
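
To reproduce these two failures outside CI, the integration suite can be re-run by name with Go's test runner. A minimal sketch, assuming a minikube source checkout, a built out/minikube-windows-amd64.exe, and a working local Docker daemon (the --minikube-start-args flag follows minikube's integration harness, so treat the exact invocation as an assumption):

    go test -v -timeout 60m ./test/integration -run "TestErrorSpam|TestMissingContainerUpgrade" -args --minikube-start-args="--driver=docker"

Note that the -run pattern selects the whole TestErrorSpam test, which in turn runs the failing setup subtest.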

TestErrorSpam/setup (50.63s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-783100 -n=1 --memory=3072 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-783100 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-783100 -n=1 --memory=3072 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-783100 --driver=docker: (50.6326063s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube container"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-783100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
- KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
- MINIKUBE_LOCATION=21139
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting "nospam-783100" primary control-plane node in "nospam-783100" cluster
* Pulling base image v0.0.48-1761985721-21837 ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-783100" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube container
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (50.63s)
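
The stderr lines that tripped the spam check ("Failing to connect to https://registry.k8s.io/ ...") mean the minikube container could not reach the image registry, which on a proxied Jenkins host is usually an environment problem rather than a product bug. Per the proxy documentation linked above, minikube reads proxy settings from the environment at start time. A sketch for this host (the proxy address reuses the http.docker.internal:3128 value seen later in this report's docker info output; the NO_PROXY list is illustrative):

    set HTTP_PROXY=http://http.docker.internal:3128
    set HTTPS_PROXY=http://http.docker.internal:3128
    set NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16
    out\minikube-windows-amd64.exe start -p nospam-783100 --driver=docker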

TestMissingContainerUpgrade (1098.54s)
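
Stripped of harness noise, the scenario is the command sequence below, taken directly from the (dbg) lines in this log: start a cluster with an old minikube release (v1.32.0), delete its container behind minikube's back, then start again with the binary under test, which must detect and recreate the missing container. The final command is the one that exits with status 109 after roughly 16 minutes:

    C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.32.0.1212062662.exe start -p missing-upgrade-184300 --memory=3072 --driver=docker
    docker stop missing-upgrade-184300
    docker rm missing-upgrade-184300
    out/minikube-windows-amd64.exe start -p missing-upgrade-184300 --memory=3072 --alsologtostderr -v=1 --driver=docker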

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.32.0.1212062662.exe start -p missing-upgrade-184300 --memory=3072 --driver=docker
E1109 14:28:12.170137   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.32.0.1212062662.exe start -p missing-upgrade-184300 --memory=3072 --driver=docker: (2m14.369053s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-184300
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-184300: (2.6316222s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-184300
version_upgrade_test.go:329: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-184300 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:329: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p missing-upgrade-184300 --memory=3072 --alsologtostderr -v=1 --driver=docker: exit status 109 (15m50.8279585s)

-- stdout --
	* [missing-upgrade-184300] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	* Using the docker driver based on existing profile
	* Starting "missing-upgrade-184300" primary control-plane node in "missing-upgrade-184300" cluster
	* Pulling base image v0.0.48-1761985721-21837 ...
	* docker "missing-upgrade-184300" container is missing, will recreate.
	
	

-- /stdout --
** stderr ** 
	I1109 14:30:14.241367    1604 out.go:360] Setting OutFile to fd 1640 ...
	I1109 14:30:14.303531    1604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:30:14.303531    1604 out.go:374] Setting ErrFile to fd 2012...
	I1109 14:30:14.304519    1604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:30:14.319521    1604 out.go:368] Setting JSON to false
	I1109 14:30:14.323517    1604 start.go:133] hostinfo: {"hostname":"minikube4","uptime":4564,"bootTime":1762694050,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6456 Build 19045.6456","kernelVersion":"10.0.19045.6456 Build 19045.6456","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1109 14:30:14.323517    1604 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1109 14:30:14.326520    1604 out.go:179] * [missing-upgrade-184300] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
	I1109 14:30:14.331510    1604 notify.go:221] Checking for updates...
	I1109 14:30:14.371970    1604 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1109 14:30:14.376610    1604 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:30:14.379966    1604 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1109 14:30:14.394211    1604 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:30:14.399168    1604 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:30:14.402153    1604 config.go:182] Loaded profile config "missing-upgrade-184300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1109 14:30:14.404160    1604 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1109 14:30:14.406158    1604 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:30:14.559154    1604 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1109 14:30:14.568147    1604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:30:14.849122    1604 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-11-09 14:30:14.824948872 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1109 14:30:14.852122    1604 out.go:179] * Using the docker driver based on existing profile
	I1109 14:30:14.854122    1604 start.go:309] selected driver: docker
	I1109 14:30:14.854122    1604 start.go:930] validating driver "docker" against &{Name:missing-upgrade-184300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-184300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:30:14.854122    1604 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:30:14.924242    1604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:30:15.229473    1604 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-11-09 14:30:15.207013972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1109 14:30:15.229473    1604 cni.go:84] Creating CNI manager for ""
	I1109 14:30:15.230472    1604 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1109 14:30:15.230472    1604 start.go:353] cluster config:
	{Name:missing-upgrade-184300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-184300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:30:15.233481    1604 out.go:179] * Starting "missing-upgrade-184300" primary control-plane node in "missing-upgrade-184300" cluster
	I1109 14:30:15.235472    1604 cache.go:134] Beginning downloading kic base image for docker with docker
	I1109 14:30:15.238473    1604 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:30:15.241472    1604 preload.go:188] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1109 14:30:15.241472    1604 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1109 14:30:15.241472    1604 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1109 14:30:15.241472    1604 cache.go:65] Caching tarball of preloaded images
	I1109 14:30:15.242481    1604 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1109 14:30:15.242481    1604 cache.go:68] Finished verifying existence of preloaded tar for v1.28.3 on docker
	I1109 14:30:15.242481    1604 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\missing-upgrade-184300\config.json ...
	I1109 14:30:15.329761    1604 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1109 14:30:15.329761    1604 cache.go:158] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	I1109 14:30:15.329761    1604 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:30:15.329761    1604 start.go:360] acquireMachinesLock for missing-upgrade-184300: {Name:mk1c95ff21e738a254f41e9850dcd0d598434226 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:30:15.329761    1604 start.go:364] duration metric: took 0s to acquireMachinesLock for "missing-upgrade-184300"
	I1109 14:30:15.329761    1604 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:30:15.329761    1604 fix.go:54] fixHost starting: 
	I1109 14:30:15.348164    1604 cli_runner.go:164] Run: docker container inspect missing-upgrade-184300 --format={{.State.Status}}
	W1109 14:30:15.408664    1604 cli_runner.go:211] docker container inspect missing-upgrade-184300 --format={{.State.Status}} returned with exit code 1
	I1109 14:30:15.408664    1604 fix.go:112] recreateIfNeeded on missing-upgrade-184300: state= err=unknown state "missing-upgrade-184300": docker container inspect missing-upgrade-184300 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-184300
	I1109 14:30:15.408664    1604 fix.go:117] machineExists: false. err=machine does not exist
	I1109 14:30:15.412658    1604 out.go:179] * docker "missing-upgrade-184300" container is missing, will recreate.
	I1109 14:30:15.414667    1604 delete.go:124] DEMOLISHING missing-upgrade-184300 ...
	I1109 14:30:15.426654    1604 cli_runner.go:164] Run: docker container inspect missing-upgrade-184300 --format={{.State.Status}}
	W1109 14:30:15.480665    1604 cli_runner.go:211] docker container inspect missing-upgrade-184300 --format={{.State.Status}} returned with exit code 1
	W1109 14:30:15.480665    1604 stop.go:83] unable to get state: unknown state "missing-upgrade-184300": docker container inspect missing-upgrade-184300 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-184300
	I1109 14:30:15.480665    1604 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-184300": docker container inspect missing-upgrade-184300 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-184300
	I1109 14:30:15.497658    1604 cli_runner.go:164] Run: docker container inspect missing-upgrade-184300 --format={{.State.Status}}
	W1109 14:30:15.555656    1604 cli_runner.go:211] docker container inspect missing-upgrade-184300 --format={{.State.Status}} returned with exit code 1
	I1109 14:30:15.555656    1604 delete.go:82] Unable to get host status for missing-upgrade-184300, assuming it has already been deleted: state: unknown state "missing-upgrade-184300": docker container inspect missing-upgrade-184300 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-184300
	I1109 14:30:15.564655    1604 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-184300
	W1109 14:30:15.623668    1604 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-184300 returned with exit code 1
	I1109 14:30:15.623668    1604 kic.go:371] could not find the container missing-upgrade-184300 to remove it. will try anyways
	I1109 14:30:15.629670    1604 cli_runner.go:164] Run: docker container inspect missing-upgrade-184300 --format={{.State.Status}}
	W1109 14:30:15.688107    1604 cli_runner.go:211] docker container inspect missing-upgrade-184300 --format={{.State.Status}} returned with exit code 1
	W1109 14:30:15.688107    1604 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-184300": docker container inspect missing-upgrade-184300 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-184300
	I1109 14:30:15.695104    1604 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-184300 /bin/bash -c "sudo init 0"
	W1109 14:30:15.751103    1604 cli_runner.go:211] docker exec --privileged -t missing-upgrade-184300 /bin/bash -c "sudo init 0" returned with exit code 1
	I1109 14:30:15.751103    1604 oci.go:659] error shutdown missing-upgrade-184300: docker exec --privileged -t missing-upgrade-184300 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-184300
	I1109 14:30:16.757693    1604 cli_runner.go:164] Run: docker container inspect missing-upgrade-184300 --format={{.State.Status}}
	W1109 14:30:16.811373    1604 cli_runner.go:211] docker container inspect missing-upgrade-184300 --format={{.State.Status}} returned with exit code 1
	I1109 14:30:16.812373    1604 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-184300": docker container inspect missing-upgrade-184300 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-184300
	I1109 14:30:16.812373    1604 oci.go:673] temporary error: container missing-upgrade-184300 status is  but expect it to be exited
	I1109 14:30:16.812373    1604 retry.go:31] will retry after 727.654787ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-184300": docker container inspect missing-upgrade-184300 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-184300
	I1109 14:30:17.547085    1604 cli_runner.go:164] Run: docker container inspect missing-upgrade-184300 --format={{.State.Status}}
	W1109 14:30:17.597254    1604 cli_runner.go:211] docker container inspect missing-upgrade-184300 --format={{.State.Status}} returned with exit code 1
	I1109 14:30:17.597254    1604 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-184300": docker container inspect missing-upgrade-184300 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-184300
	I1109 14:30:17.597254    1604 oci.go:673] temporary error: container missing-upgrade-184300 status is  but expect it to be exited
	I1109 14:30:17.597254    1604 retry.go:31] will retry after 398.828297ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-184300": docker container inspect missing-upgrade-184300 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-184300
	I1109 14:30:18.001899    1604 cli_runner.go:164] Run: docker container inspect missing-upgrade-184300 --format={{.State.Status}}
	W1109 14:30:18.050285    1604 cli_runner.go:211] docker container inspect missing-upgrade-184300 --format={{.State.Status}} returned with exit code 1
	I1109 14:30:18.050285    1604 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-184300": docker container inspect missing-upgrade-184300 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-184300
	I1109 14:30:18.050285    1604 oci.go:673] temporary error: container missing-upgrade-184300 status is  but expect it to be exited
	I1109 14:30:18.050285    1604 retry.go:31] will retry after 642.519638ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-184300": docker container inspect missing-upgrade-184300 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-184300
	I1109 14:30:18.699359    1604 cli_runner.go:164] Run: docker container inspect missing-upgrade-184300 --format={{.State.Status}}
	W1109 14:30:18.745360    1604 cli_runner.go:211] docker container inspect missing-upgrade-184300 --format={{.State.Status}} returned with exit code 1
	I1109 14:30:18.745360    1604 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-184300": docker container inspect missing-upgrade-184300 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-184300
	I1109 14:30:18.745360    1604 oci.go:673] temporary error: container missing-upgrade-184300 status is  but expect it to be exited
	I1109 14:30:18.745360    1604 retry.go:31] will retry after 1.440797248s: couldn't verify container is exited. %v: unknown state "missing-upgrade-184300": docker container inspect missing-upgrade-184300 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-184300
	I1109 14:30:20.193798    1604 cli_runner.go:164] Run: docker container inspect missing-upgrade-184300 --format={{.State.Status}}
	W1109 14:30:20.245975    1604 cli_runner.go:211] docker container inspect missing-upgrade-184300 --format={{.State.Status}} returned with exit code 1
	I1109 14:30:20.245975    1604 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-184300": docker container inspect missing-upgrade-184300 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-184300
	I1109 14:30:20.245975    1604 oci.go:673] temporary error: container missing-upgrade-184300 status is  but expect it to be exited
	I1109 14:30:20.245975    1604 retry.go:31] will retry after 2.774575097s: couldn't verify container is exited. %v: unknown state "missing-upgrade-184300": docker container inspect missing-upgrade-184300 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-184300
	I1109 14:30:23.028148    1604 cli_runner.go:164] Run: docker container inspect missing-upgrade-184300 --format={{.State.Status}}
	W1109 14:30:23.088942    1604 cli_runner.go:211] docker container inspect missing-upgrade-184300 --format={{.State.Status}} returned with exit code 1
	I1109 14:30:23.088942    1604 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-184300": docker container inspect missing-upgrade-184300 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-184300
	I1109 14:30:23.088942    1604 oci.go:673] temporary error: container missing-upgrade-184300 status is  but expect it to be exited
	I1109 14:30:23.088942    1604 retry.go:31] will retry after 4.240507195s: couldn't verify container is exited. %v: unknown state "missing-upgrade-184300": docker container inspect missing-upgrade-184300 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-184300
	I1109 14:30:27.338244    1604 cli_runner.go:164] Run: docker container inspect missing-upgrade-184300 --format={{.State.Status}}
	W1109 14:30:27.387071    1604 cli_runner.go:211] docker container inspect missing-upgrade-184300 --format={{.State.Status}} returned with exit code 1
	I1109 14:30:27.387071    1604 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-184300": docker container inspect missing-upgrade-184300 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-184300
	I1109 14:30:27.387071    1604 oci.go:673] temporary error: container missing-upgrade-184300 status is  but expect it to be exited
	I1109 14:30:27.387071    1604 retry.go:31] will retry after 6.892236651s: couldn't verify container is exited. %v: unknown state "missing-upgrade-184300": docker container inspect missing-upgrade-184300 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-184300
	I1109 14:30:34.285684    1604 cli_runner.go:164] Run: docker container inspect missing-upgrade-184300 --format={{.State.Status}}
	W1109 14:30:34.336685    1604 cli_runner.go:211] docker container inspect missing-upgrade-184300 --format={{.State.Status}} returned with exit code 1
	I1109 14:30:34.337701    1604 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-184300": docker container inspect missing-upgrade-184300 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-184300
	I1109 14:30:34.337701    1604 oci.go:673] temporary error: container missing-upgrade-184300 status is  but expect it to be exited
	I1109 14:30:34.337701    1604 oci.go:88] couldn't shut down missing-upgrade-184300 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-184300": docker container inspect missing-upgrade-184300 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-184300
	 
	I1109 14:30:34.343689    1604 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-184300
	I1109 14:30:34.408704    1604 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-184300
	W1109 14:30:34.474863    1604 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-184300 returned with exit code 1
	I1109 14:30:34.480861    1604 cli_runner.go:164] Run: docker network inspect missing-upgrade-184300 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:30:34.551430    1604 cli_runner.go:164] Run: docker network rm missing-upgrade-184300
	I1109 14:30:34.927962    1604 fix.go:124] Sleeping 1 second for extra luck!
	I1109 14:30:35.928335    1604 start.go:125] createHost starting for "" (driver="docker")
	I1109 14:30:35.930827    1604 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1109 14:30:35.931482    1604 start.go:159] libmachine.API.Create for "missing-upgrade-184300" (driver="docker")
	I1109 14:30:35.931482    1604 client.go:173] LocalClient.Create starting
	I1109 14:30:35.932119    1604 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1109 14:30:35.932559    1604 main.go:143] libmachine: Decoding PEM data...
	I1109 14:30:35.932647    1604 main.go:143] libmachine: Parsing certificate...
	I1109 14:30:35.932874    1604 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1109 14:30:35.932874    1604 main.go:143] libmachine: Decoding PEM data...
	I1109 14:30:35.932874    1604 main.go:143] libmachine: Parsing certificate...
	I1109 14:30:35.944153    1604 cli_runner.go:164] Run: docker network inspect missing-upgrade-184300 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 14:30:35.999731    1604 cli_runner.go:211] docker network inspect missing-upgrade-184300 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 14:30:36.006738    1604 network_create.go:284] running [docker network inspect missing-upgrade-184300] to gather additional debugging logs...
	I1109 14:30:36.006738    1604 cli_runner.go:164] Run: docker network inspect missing-upgrade-184300
	W1109 14:30:36.062717    1604 cli_runner.go:211] docker network inspect missing-upgrade-184300 returned with exit code 1
	I1109 14:30:36.062717    1604 network_create.go:287] error running [docker network inspect missing-upgrade-184300]: docker network inspect missing-upgrade-184300: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-184300 not found
	I1109 14:30:36.062717    1604 network_create.go:289] output of [docker network inspect missing-upgrade-184300]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-184300 not found
	
	** /stderr **
	I1109 14:30:36.071725    1604 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:30:36.169252    1604 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1109 14:30:36.200410    1604 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1109 14:30:36.231255    1604 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1109 14:30:36.245705    1604 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b53740}
	I1109 14:30:36.245705    1604 network_create.go:124] attempt to create docker network missing-upgrade-184300 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1109 14:30:36.251708    1604 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-184300 missing-upgrade-184300
	W1109 14:30:36.301717    1604 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-184300 missing-upgrade-184300 returned with exit code 1
	W1109 14:30:36.301717    1604 network_create.go:149] failed to create docker network missing-upgrade-184300 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-184300 missing-upgrade-184300: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1109 14:30:36.301717    1604 network_create.go:116] failed to create docker network missing-upgrade-184300 192.168.76.0/24, will retry: subnet is taken
	I1109 14:30:36.325720    1604 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1109 14:30:36.356506    1604 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1109 14:30:36.372064    1604 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1109 14:30:36.387961    1604 network.go:209] skipping subnet 192.168.103.0/24 that is reserved: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1109 14:30:36.403313    1604 network.go:209] skipping subnet 192.168.112.0/24 that is reserved: &{IP:192.168.112.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.112.0/24 Gateway:192.168.112.1 ClientMin:192.168.112.2 ClientMax:192.168.112.254 Broadcast:192.168.112.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1109 14:30:36.416783    1604 network.go:206] using free private subnet 192.168.121.0/24: &{IP:192.168.121.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.121.0/24 Gateway:192.168.121.1 ClientMin:192.168.121.2 ClientMax:192.168.121.254 Broadcast:192.168.121.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ccc4e0}
	I1109 14:30:36.416783    1604 network_create.go:124] attempt to create docker network missing-upgrade-184300 192.168.121.0/24 with gateway 192.168.121.1 and MTU of 1500 ...
	I1109 14:30:36.422329    1604 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.121.0/24 --gateway=192.168.121.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-184300 missing-upgrade-184300
	I1109 14:30:36.570706    1604 network_create.go:108] docker network missing-upgrade-184300 192.168.121.0/24 created
	I1109 14:30:36.570706    1604 kic.go:121] calculated static IP "192.168.121.2" for the "missing-upgrade-184300" container
	I1109 14:30:36.593320    1604 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 14:30:36.659991    1604 cli_runner.go:164] Run: docker volume create missing-upgrade-184300 --label name.minikube.sigs.k8s.io=missing-upgrade-184300 --label created_by.minikube.sigs.k8s.io=true
	I1109 14:30:36.706998    1604 oci.go:103] Successfully created a docker volume missing-upgrade-184300
	I1109 14:30:36.712989    1604 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-184300-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-184300 --entrypoint /usr/bin/test -v missing-upgrade-184300:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1109 14:30:37.826399    1604 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-184300-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-184300 --entrypoint /usr/bin/test -v missing-upgrade-184300:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: (1.1133968s)
	I1109 14:30:37.826399    1604 oci.go:107] Successfully prepared a docker volume missing-upgrade-184300
	I1109 14:30:37.826399    1604 preload.go:188] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1109 14:30:37.826399    1604 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 14:30:37.834650    1604 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-184300:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1109 14:30:52.077032    1604 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-184300:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (14.2421817s)
	I1109 14:30:52.077032    1604 kic.go:203] duration metric: took 14.2504732s to extract preloaded images to volume ...
	I1109 14:30:52.088040    1604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:30:52.365759    1604 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:85 SystemTime:2025-11-09 14:30:52.346940382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1109 14:30:52.371750    1604 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 14:30:52.692841    1604 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-184300 --name missing-upgrade-184300 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-184300 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-184300 --network missing-upgrade-184300 --ip 192.168.121.2 --volume missing-upgrade-184300:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1109 14:30:53.783167    1604 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-184300 --name missing-upgrade-184300 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-184300 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-184300 --network missing-upgrade-184300 --ip 192.168.121.2 --volume missing-upgrade-184300:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0: (1.090314s)
	I1109 14:30:53.792516    1604 cli_runner.go:164] Run: docker container inspect missing-upgrade-184300 --format={{.State.Running}}
	I1109 14:30:53.859508    1604 cli_runner.go:164] Run: docker container inspect missing-upgrade-184300 --format={{.State.Status}}
	I1109 14:30:53.927527    1604 cli_runner.go:164] Run: docker exec missing-upgrade-184300 stat /var/lib/dpkg/alternatives/iptables
	I1109 14:30:54.051512    1604 oci.go:144] the created container "missing-upgrade-184300" has a running status.
	I1109 14:30:54.051512    1604 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\missing-upgrade-184300\id_rsa...
	I1109 14:30:54.313162    1604 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\missing-upgrade-184300\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 14:30:54.414166    1604 cli_runner.go:164] Run: docker container inspect missing-upgrade-184300 --format={{.State.Status}}
	I1109 14:30:54.492158    1604 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 14:30:54.492158    1604 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-184300 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 14:30:54.657872    1604 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\missing-upgrade-184300\id_rsa...
	I1109 14:30:56.997073    1604 cli_runner.go:164] Run: docker container inspect missing-upgrade-184300 --format={{.State.Status}}
	I1109 14:30:57.055217    1604 machine.go:94] provisionDockerMachine start ...
	I1109 14:30:57.061734    1604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-184300
	I1109 14:30:57.133118    1604 main.go:143] libmachine: Using SSH client type: native
	I1109 14:30:57.133118    1604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x11319e0] 0x1134500 <nil>  [] 0s} 127.0.0.1 51764 <nil> <nil>}
	I1109 14:30:57.133118    1604 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:30:57.307086    1604 main.go:143] libmachine: SSH cmd err, output: <nil>: missing-upgrade-184300
	
	I1109 14:30:57.307159    1604 ubuntu.go:182] provisioning hostname "missing-upgrade-184300"
	I1109 14:30:57.313512    1604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-184300
	I1109 14:30:57.375146    1604 main.go:143] libmachine: Using SSH client type: native
	I1109 14:30:57.376145    1604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x11319e0] 0x1134500 <nil>  [] 0s} 127.0.0.1 51764 <nil> <nil>}
	I1109 14:30:57.376145    1604 main.go:143] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-184300 && echo "missing-upgrade-184300" | sudo tee /etc/hostname
	I1109 14:30:57.549129    1604 main.go:143] libmachine: SSH cmd err, output: <nil>: missing-upgrade-184300
	
	I1109 14:30:57.558613    1604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-184300
	I1109 14:30:57.613523    1604 main.go:143] libmachine: Using SSH client type: native
	I1109 14:30:57.614515    1604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x11319e0] 0x1134500 <nil>  [] 0s} 127.0.0.1 51764 <nil> <nil>}
	I1109 14:30:57.614515    1604 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-184300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-184300/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-184300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:30:57.777413    1604 main.go:143] libmachine: SSH cmd err, output: <nil>: 
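Each "About to run SSH command" above goes through libmachine's native SSH client against the container's published SSH port (127.0.0.1:51764). For replaying one provisioning command by hand, a sketch using golang.org/x/crypto/ssh; the key path and port are taken from the log, everything else is an assumption, and host-key checking is skipped only because the target is a disposable local container:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(`C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\missing-upgrade-184300\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway local container only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:51764", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.Output("hostname")
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s", out) // expected: missing-upgrade-184300
    }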
	I1109 14:30:57.777413    1604 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1109 14:30:57.777413    1604 ubuntu.go:190] setting up certificates
	I1109 14:30:57.777413    1604 provision.go:84] configureAuth start
	I1109 14:30:57.784414    1604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-184300
	I1109 14:30:57.844320    1604 provision.go:143] copyHostCerts
	I1109 14:30:57.844320    1604 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1109 14:30:57.844320    1604 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1109 14:30:57.845115    1604 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1109 14:30:57.846049    1604 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1109 14:30:57.846091    1604 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1109 14:30:57.846362    1604 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1109 14:30:57.847185    1604 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1109 14:30:57.847268    1604 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1109 14:30:57.847565    1604 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1109 14:30:57.848248    1604 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.missing-upgrade-184300 san=[127.0.0.1 192.168.121.2 localhost minikube missing-upgrade-184300]
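The provision.go:117 line above issues a TLS server certificate for dockerd, signed by the minikube CA, with the listed SANs (127.0.0.1, 192.168.121.2, localhost, minikube, missing-upgrade-184300). A rough, self-contained crypto/x509 sketch of issuing a certificate with those SANs; the throwaway in-memory CA and the validity window are assumptions standing in for ca.pem/ca-key.pem:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // throwaway CA standing in for the ca.pem/ca-key.pem pair in the log
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            panic(err)
        }

        // server certificate carrying the SANs from the provision.go log line
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.missing-upgrade-184300"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.121.2")},
            DNSNames:     []string{"localhost", "minikube", "missing-upgrade-184300"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("issued %d-byte server cert\n", len(der))
    }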
	I1109 14:30:58.246543    1604 provision.go:177] copyRemoteCerts
	I1109 14:30:58.254535    1604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:30:58.260528    1604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-184300
	I1109 14:30:58.316532    1604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51764 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\missing-upgrade-184300\id_rsa Username:docker}
	I1109 14:30:58.453967    1604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 14:30:58.509171    1604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:30:58.566202    1604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
	I1109 14:30:58.598203    1604 provision.go:87] duration metric: took 820.7803ms to configureAuth
	I1109 14:30:58.599208    1604 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:30:58.599208    1604 config.go:182] Loaded profile config "missing-upgrade-184300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1109 14:30:58.605200    1604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-184300
	I1109 14:30:58.661245    1604 main.go:143] libmachine: Using SSH client type: native
	I1109 14:30:58.662025    1604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x11319e0] 0x1134500 <nil>  [] 0s} 127.0.0.1 51764 <nil> <nil>}
	I1109 14:30:58.662062    1604 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1109 14:30:58.817301    1604 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1109 14:30:58.817301    1604 ubuntu.go:71] root file system type: overlay
	I1109 14:30:58.817301    1604 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1109 14:30:58.823304    1604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-184300
	I1109 14:30:58.884424    1604 main.go:143] libmachine: Using SSH client type: native
	I1109 14:30:58.884878    1604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x11319e0] 0x1134500 <nil>  [] 0s} 127.0.0.1 51764 <nil> <nil>}
	I1109 14:30:58.884978    1604 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1109 14:30:59.069110    1604 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1109 14:30:59.076576    1604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-184300
	I1109 14:30:59.142324    1604 main.go:143] libmachine: Using SSH client type: native
	I1109 14:30:59.142324    1604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x11319e0] 0x1134500 <nil>  [] 0s} 127.0.0.1 51764 <nil> <nil>}
	I1109 14:30:59.142324    1604 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1109 14:30:59.934480    1604 main.go:143] libmachine: SSH cmd err, output: Process exited with status 1: --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-11-09 14:30:59.056093801 +0000
	@@ -1,30 +1,31 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	+After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	 Wants=network-online.target containerd.service
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,9 +33,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	I1109 14:30:59.934524    1604 ubuntu.go:208] Error setting container-runtime options during provisioning ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-11-09 14:30:59.056093801 +0000
	@@ -1,30 +1,31 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	+After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	 Wants=network-online.target containerd.service
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,9 +33,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I1109 14:30:59.934853    1604 machine.go:97] duration metric: took 2.8795588s to provisionDockerMachine
	I1109 14:30:59.934907    1604 client.go:176] duration metric: took 24.0031559s to LocalClient.Create
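Reading the failure above: the diff printed, the mv and enable ran (the SysV synchronization lines confirm it), and only the forced "systemctl restart docker" failed; the report stops at the generic "Job for docker.service failed" message without the journal it points to. When reproducing, one way to pull that journal from the host side is a sketch like this (the container name comes from the log; everything else is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // pull the docker.service journal out of the kic container named in the log
        out, err := exec.Command("docker", "exec", "missing-upgrade-184300",
            "journalctl", "-xeu", "docker.service", "--no-pager").CombinedOutput()
        fmt.Println(string(out))
        if err != nil {
            fmt.Println("journalctl failed:", err)
        }
    }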
	I1109 14:31:01.941966    1604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:31:01.947982    1604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-184300
	I1109 14:31:02.011520    1604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51764 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\missing-upgrade-184300\id_rsa Username:docker}
	I1109 14:31:02.138317    1604 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:31:02.146884    1604 start.go:128] duration metric: took 26.2181429s to createHost
	I1109 14:31:02.154968    1604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:31:02.160893    1604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-184300
	I1109 14:31:02.222871    1604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51764 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\missing-upgrade-184300\id_rsa Username:docker}
	I1109 14:31:02.336874    1604 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:31:02.346867    1604 fix.go:56] duration metric: took 47.0165789s for fixHost
	I1109 14:31:02.346867    1604 start.go:83] releasing machines lock for "missing-upgrade-184300", held for 47.0165789s
	W1109 14:31:02.346867    1604 start.go:715] error starting host: recreate: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-11-09 14:30:59.056093801 +0000
	@@ -1,30 +1,31 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	+After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	 Wants=network-online.target containerd.service
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,9 +33,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	W1109 14:31:02.347868    1604 out.go:285] ! StartHost failed, but will try again: recreate: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-11-09 14:30:59.056093801 +0000
	@@ -1,30 +1,31 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	+After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	 Wants=network-online.target containerd.service
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,9 +33,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	I1109 14:31:02.347868    1604 start.go:730] Will try again in 5 seconds ...
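The start.go:715/730 lines encode minikube's one-retry policy: a failed StartHost is surfaced as a warning, the machines lock is released, and the same profile is retried once after five seconds, this time via fixHost on the now-existing container. A hypothetical reduction of that control flow, with invented names:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startWithRetry tolerates exactly one StartHost failure, then retries once.
    func startWithRetry(start func() error) error {
        if err := start(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second)
            return start()
        }
        return nil
    }

    func main() {
        calls := 0
        err := startWithRetry(func() error {
            calls++
            if calls == 1 {
                return errors.New("provisioning: ssh command error")
            }
            return nil // second pass reuses the existing machine
        })
        fmt.Println("result:", err)
    }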
	I1109 14:31:07.348311    1604 start.go:360] acquireMachinesLock for missing-upgrade-184300: {Name:mk1c95ff21e738a254f41e9850dcd0d598434226 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:31:07.348311    1604 start.go:364] duration metric: took 0s to acquireMachinesLock for "missing-upgrade-184300"
	I1109 14:31:07.348311    1604 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:31:07.348311    1604 fix.go:54] fixHost starting: 
	I1109 14:31:07.367296    1604 cli_runner.go:164] Run: docker container inspect missing-upgrade-184300 --format={{.State.Status}}
	I1109 14:31:07.432286    1604 fix.go:112] recreateIfNeeded on missing-upgrade-184300: state=Running err=<nil>
	W1109 14:31:07.432286    1604 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:31:07.436319    1604 out.go:252] * Updating the running docker "missing-upgrade-184300" container ...
	I1109 14:31:07.436319    1604 machine.go:94] provisionDockerMachine start ...
	I1109 14:31:07.444304    1604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-184300
	I1109 14:31:07.519308    1604 main.go:143] libmachine: Using SSH client type: native
	I1109 14:31:07.520288    1604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x11319e0] 0x1134500 <nil>  [] 0s} 127.0.0.1 51764 <nil> <nil>}
	I1109 14:31:07.520288    1604 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:31:07.710311    1604 main.go:143] libmachine: SSH cmd err, output: <nil>: missing-upgrade-184300
	
	I1109 14:31:07.710311    1604 ubuntu.go:182] provisioning hostname "missing-upgrade-184300"
	I1109 14:31:07.717302    1604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-184300
	I1109 14:31:07.785285    1604 main.go:143] libmachine: Using SSH client type: native
	I1109 14:31:07.785285    1604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x11319e0] 0x1134500 <nil>  [] 0s} 127.0.0.1 51764 <nil> <nil>}
	I1109 14:31:07.785285    1604 main.go:143] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-184300 && echo "missing-upgrade-184300" | sudo tee /etc/hostname
	I1109 14:31:07.963930    1604 main.go:143] libmachine: SSH cmd err, output: <nil>: missing-upgrade-184300
	
	I1109 14:31:07.970500    1604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-184300
	I1109 14:31:08.033794    1604 main.go:143] libmachine: Using SSH client type: native
	I1109 14:31:08.034796    1604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x11319e0] 0x1134500 <nil>  [] 0s} 127.0.0.1 51764 <nil> <nil>}
	I1109 14:31:08.034796    1604 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-184300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-184300/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-184300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:31:08.208732    1604 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:31:08.208732    1604 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1109 14:31:08.208732    1604 ubuntu.go:190] setting up certificates
	I1109 14:31:08.208732    1604 provision.go:84] configureAuth start
	I1109 14:31:08.214727    1604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-184300
	I1109 14:31:08.278548    1604 provision.go:143] copyHostCerts
	I1109 14:31:08.278548    1604 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1109 14:31:08.278548    1604 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1109 14:31:08.279269    1604 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1109 14:31:08.280011    1604 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1109 14:31:08.280011    1604 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1109 14:31:08.280011    1604 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1109 14:31:08.281159    1604 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1109 14:31:08.281159    1604 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1109 14:31:08.281159    1604 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1109 14:31:08.282019    1604 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.missing-upgrade-184300 san=[127.0.0.1 192.168.121.2 localhost minikube missing-upgrade-184300]
	I1109 14:31:08.780051    1604 provision.go:177] copyRemoteCerts
	I1109 14:31:08.789054    1604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:31:08.795332    1604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-184300
	I1109 14:31:08.852525    1604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51764 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\missing-upgrade-184300\id_rsa Username:docker}
	I1109 14:31:08.966378    1604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:31:09.005403    1604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:31:09.045759    1604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
	I1109 14:31:09.096383    1604 provision.go:87] duration metric: took 887.6409ms to configureAuth
	I1109 14:31:09.096383    1604 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:31:09.097387    1604 config.go:182] Loaded profile config "missing-upgrade-184300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1109 14:31:09.106780    1604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-184300
	I1109 14:31:09.161940    1604 main.go:143] libmachine: Using SSH client type: native
	I1109 14:31:09.161940    1604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x11319e0] 0x1134500 <nil>  [] 0s} 127.0.0.1 51764 <nil> <nil>}
	I1109 14:31:09.161940    1604 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1109 14:31:09.321177    1604 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1109 14:31:09.321177    1604 ubuntu.go:71] root file system type: overlay
	I1109 14:31:09.321177    1604 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1109 14:31:09.327344    1604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-184300
	I1109 14:31:09.392762    1604 main.go:143] libmachine: Using SSH client type: native
	I1109 14:31:09.392952    1604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x11319e0] 0x1134500 <nil>  [] 0s} 127.0.0.1 51764 <nil> <nil>}
	I1109 14:31:09.393531    1604 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1109 14:31:09.566749    1604 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1109 14:31:09.575344    1604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-184300
	I1109 14:31:09.633664    1604 main.go:143] libmachine: Using SSH client type: native
	I1109 14:31:09.633664    1604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x11319e0] 0x1134500 <nil>  [] 0s} 127.0.0.1 51764 <nil> <nil>}
	I1109 14:31:09.634672    1604 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1109 14:31:09.796597    1604 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:31:09.796649    1604 machine.go:97] duration metric: took 2.3603038s to provisionDockerMachine
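Note why this second provisioning pass reports empty output at 14:31:09.796 where the first pass failed: "diff -u" now exits 0 because the first attempt had already moved docker.service.new into place before its restart failed, so the "|| { mv; ...; restart; }" branch is skipped entirely and docker is never restarted again. A minimal sketch of that short-circuit idiom, using the unit paths from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // syncUnit mirrors the shell idiom in the log: replace and restart only
    // when the proposed unit differs from the installed one.
    func syncUnit(current, proposed string) error {
        // diff exits 0 when the files already match; nothing is restarted then
        if err := exec.Command("diff", "-u", current, proposed).Run(); err == nil {
            return nil
        }
        if err := exec.Command("sudo", "mv", proposed, current).Run(); err != nil {
            return err
        }
        return exec.Command("sudo", "systemctl", "restart", "docker").Run()
    }

    func main() {
        err := syncUnit("/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new")
        fmt.Println("syncUnit:", err)
    }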
	I1109 14:31:09.796649    1604 start.go:293] postStartSetup for "missing-upgrade-184300" (driver="docker")
	I1109 14:31:09.796695    1604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:31:09.804144    1604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:31:09.809747    1604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-184300
	I1109 14:31:09.873417    1604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51764 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\missing-upgrade-184300\id_rsa Username:docker}
	I1109 14:31:09.991967    1604 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:31:10.002954    1604 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:31:10.002954    1604 main.go:143] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1109 14:31:10.002954    1604 main.go:143] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1109 14:31:10.002954    1604 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1109 14:31:10.002954    1604 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1109 14:31:10.002954    1604 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1109 14:31:10.003946    1604 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\103362.pem -> 103362.pem in /etc/ssl/certs
	I1109 14:31:10.010948    1604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:31:10.024945    1604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\103362.pem --> /etc/ssl/certs/103362.pem (1708 bytes)
	I1109 14:31:10.061542    1604 start.go:296] duration metric: took 264.8895ms for postStartSetup
	I1109 14:31:10.070534    1604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:31:10.078814    1604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-184300
	I1109 14:31:10.143625    1604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51764 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\missing-upgrade-184300\id_rsa Username:docker}
	I1109 14:31:10.258245    1604 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:31:10.273209    1604 fix.go:56] duration metric: took 2.9248117s for fixHost
	I1109 14:31:10.273209    1604 start.go:83] releasing machines lock for "missing-upgrade-184300", held for 2.924865s
	I1109 14:31:10.278616    1604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-184300
	I1109 14:31:10.332617    1604 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1109 14:31:10.339624    1604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-184300
	I1109 14:31:10.339624    1604 ssh_runner.go:195] Run: cat /version.json
	I1109 14:31:10.346613    1604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-184300
	I1109 14:31:10.402610    1604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51764 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\missing-upgrade-184300\id_rsa Username:docker}
	I1109 14:31:10.403616    1604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51764 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\missing-upgrade-184300\id_rsa Username:docker}
	W1109 14:31:10.506524    1604 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	W1109 14:31:10.516986    1604 out.go:285] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.32.0 -> Actual minikube version: v1.37.0
	I1109 14:31:10.526344    1604 ssh_runner.go:195] Run: systemctl --version
	I1109 14:31:10.544275    1604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1109 14:31:10.562278    1604 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W1109 14:31:10.576613    1604 start.go:440] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
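Both warnings in this stretch come from host/guest confusion on a Windows host: "curl.exe" is the Windows binary name, but the command runs inside the Linux guest (hence "curl.exe: command not found"), and "\etc\cni\net.d" is a Windows-style path join handed to a Linux find. In Go the distinction is path.Join (always forward slashes) versus filepath.Join (OS-specific separators); a small illustration, not minikube's actual code:

    package main

    import (
        "fmt"
        "path"
        "path/filepath"
    )

    func main() {
        // On a Windows host this prints \etc\cni\net.d, matching the failed
        // find invocation in the log; on Linux the two lines are identical.
        fmt.Println(filepath.Join("/etc", "cni", "net.d"))
        // path.Join always uses forward slashes, which is what a command
        // executed on the Linux guest needs regardless of the host OS.
        fmt.Println(path.Join("/etc", "cni", "net.d"))
    }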
	I1109 14:31:10.587979    1604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W1109 14:31:10.617839    1604 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1109 14:31:10.617839    1604 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1109 14:31:21.084937    1604 ssh_runner.go:235] Completed: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;: (10.4967742s)
	I1109 14:31:21.084984    1604 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1109 14:31:21.084984    1604 start.go:496] detecting cgroup driver to use...
	I1109 14:31:21.085056    1604 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:31:21.085221    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:31:21.118198    1604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1109 14:31:21.144881    1604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1109 14:31:21.165468    1604 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1109 14:31:21.175535    1604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1109 14:31:21.197976    1604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1109 14:31:21.220693    1604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1109 14:31:21.271151    1604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1109 14:31:21.296764    1604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:31:21.492844    1604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1109 14:31:21.519701    1604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1109 14:31:21.544372    1604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
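
The run of sed commands above edits /etc/containerd/config.toml in place: pinning the pause image, forcing SystemdCgroup = false to match the detected cgroupfs driver, and migrating runtime names to io.containerd.runc.v2. A minimal Go equivalent of just the SystemdCgroup edit, assuming the whole file fits in memory:

    package main

    import (
        "fmt"
        "regexp"
    )

    // setSystemdCgroup mirrors the sed expression in the log:
    //   s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g
    func setSystemdCgroup(config []byte, enabled bool) []byte {
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        repl := fmt.Sprintf("${1}SystemdCgroup = %t", enabled)
        return re.ReplaceAll(config, []byte(repl))
    }

    func main() {
        in := []byte("    SystemdCgroup = true\n")
        fmt.Print(string(setSystemdCgroup(in, false)))
    }
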
	I1109 14:31:21.580294    1604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:31:21.604225    1604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:31:21.626431    1604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:31:21.772745    1604 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1109 14:31:21.928773    1604 start.go:496] detecting cgroup driver to use...
	I1109 14:31:21.928773    1604 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:31:21.937760    1604 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1109 14:31:21.963763    1604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:31:21.988758    1604 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:31:22.068225    1604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:31:22.099468    1604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1109 14:31:22.117441    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:31:22.156758    1604 ssh_runner.go:195] Run: which cri-dockerd
	I1109 14:31:22.175773    1604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1109 14:31:22.190747    1604 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1109 14:31:22.222754    1604 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1109 14:31:22.372459    1604 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1109 14:31:22.500138    1604 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1109 14:31:22.500138    1604 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1109 14:31:22.535129    1604 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1109 14:31:22.567711    1604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:31:22.750712    1604 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1109 14:31:23.595781    1604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:31:23.626983    1604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1109 14:31:23.675504    1604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1109 14:31:23.704279    1604 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1109 14:31:23.847741    1604 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1109 14:31:23.998240    1604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:31:24.141337    1604 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1109 14:31:24.176905    1604 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1109 14:31:24.202735    1604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:31:24.342960    1604 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1109 14:31:24.625358    1604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1109 14:31:24.643357    1604 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1109 14:31:24.653368    1604 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
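
"Will wait 60s for socket path" is a stat-until-deadline poll against /var/run/cri-dockerd.sock, as the stat above shows. A sketch of the idea; the 500ms cadence is an assumption, not necessarily minikube's exact interval:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the deadline passes, roughly
    // what the "Will wait 60s for socket path" step does via stat.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
    }
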
	I1109 14:31:24.661357    1604 start.go:564] Will wait 60s for crictl version
	I1109 14:31:24.669375    1604 ssh_runner.go:195] Run: which crictl
	I1109 14:31:24.684378    1604 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1109 14:31:24.773351    1604 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1109 14:31:24.779352    1604 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 14:31:24.825351    1604 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 14:31:24.872352    1604 out.go:252] * Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
	I1109 14:31:24.878351    1604 cli_runner.go:164] Run: docker exec -t missing-upgrade-184300 dig +short host.docker.internal
	I1109 14:31:25.013367    1604 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1109 14:31:25.023360    1604 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1109 14:31:25.031371    1604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
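
The bash one-liner above is an idempotent hosts update: filter out any existing host.minikube.internal line, append the fresh mapping, then copy the temp file over /etc/hosts. A sketch of the same filter-and-append on file contents; the real step runs under sudo over SSH, which is elided here:

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHost drops any line ending in "\t"+name and appends a fresh
    // "ip\tname" entry, matching the { grep -v; echo; } > tmp; cp pipeline.
    func upsertHost(hosts, ip, name string) string {
        var out []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                out = append(out, line)
            }
        }
        out = append(out, ip+"\t"+name)
        return strings.Join(out, "\n") + "\n"
    }

    func main() {
        fmt.Print(upsertHost("127.0.0.1\tlocalhost\n", "192.168.65.254", "host.minikube.internal"))
    }
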
	I1109 14:31:25.063362    1604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" missing-upgrade-184300
	I1109 14:31:25.123356    1604 kubeadm.go:884] updating cluster {Name:missing-upgrade-184300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-184300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:31:25.123356    1604 preload.go:188] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1109 14:31:25.130368    1604 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 14:31:25.178358    1604 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1109 14:31:25.178358    1604 docker.go:621] Images already preloaded, skipping extraction
	I1109 14:31:25.183356    1604 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 14:31:25.213358    1604 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1109 14:31:25.213358    1604 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:31:25.213358    1604 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.3 docker true true} ...
	I1109 14:31:25.213358    1604 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=missing-upgrade-184300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-184300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:31:25.219361    1604 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1109 14:31:25.313363    1604 cni.go:84] Creating CNI manager for ""
	I1109 14:31:25.313363    1604 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1109 14:31:25.313363    1604 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:31:25.313363    1604 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:missing-upgrade-184300 NodeName:missing-upgrade-184300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:31:25.313363    1604 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "missing-upgrade-184300"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
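The generated kubeadm.yaml above is four YAML documents in one stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); note the KubeletConfiguration deliberately sets the eviction thresholds to 0% so disk pressure never evicts pods in this test environment. A stdlib-only sketch that splits such a stream on document separators and lists each kind:

    package main

    import (
        "fmt"
        "strings"
    )

    // kinds returns the `kind:` value of each document in a multi-doc YAML
    // stream, split on the standard "---" separator lines.
    func kinds(stream string) []string {
        var out []string
        for _, doc := range strings.Split(stream, "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "kind: ") {
                    out = append(out, strings.TrimPrefix(line, "kind: "))
                }
            }
        }
        return out
    }

    func main() {
        stream := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n"
        fmt.Println(kinds(stream)) // [InitConfiguration ClusterConfiguration]
    }
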
	I1109 14:31:25.321357    1604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1109 14:31:25.336378    1604 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:31:25.344376    1604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:31:25.362377    1604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1109 14:31:25.512837    1604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:31:25.543127    1604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I1109 14:31:25.575126    1604 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:31:25.582430    1604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:31:25.610162    1604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:31:25.737498    1604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:31:25.757508    1604 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\missing-upgrade-184300 for IP: 192.168.103.2
	I1109 14:31:25.757508    1604 certs.go:195] generating shared ca certs ...
	I1109 14:31:25.758509    1604 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:31:25.758509    1604 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1109 14:31:25.758509    1604 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1109 14:31:25.758509    1604 certs.go:257] generating profile certs ...
	I1109 14:31:25.759501    1604 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\missing-upgrade-184300\client.key
	I1109 14:31:25.759501    1604 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\missing-upgrade-184300\apiserver.key.fdb2592e
	I1109 14:31:25.759501    1604 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\missing-upgrade-184300\apiserver.crt.fdb2592e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1109 14:31:25.926372    1604 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\missing-upgrade-184300\apiserver.crt.fdb2592e ...
	I1109 14:31:25.926372    1604 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\missing-upgrade-184300\apiserver.crt.fdb2592e: {Name:mk458ae50fbcd921fa223c0b069124d505ffe9d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:31:25.927251    1604 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\missing-upgrade-184300\apiserver.key.fdb2592e ...
	I1109 14:31:25.927251    1604 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\missing-upgrade-184300\apiserver.key.fdb2592e: {Name:mk133cb75a6b83bdee4dd9e23bfcba5ad80c93c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:31:25.929111    1604 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\missing-upgrade-184300\apiserver.crt.fdb2592e -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\missing-upgrade-184300\apiserver.crt
	I1109 14:31:25.947869    1604 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\missing-upgrade-184300\apiserver.key.fdb2592e -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\missing-upgrade-184300\apiserver.key
	I1109 14:31:25.948982    1604 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\missing-upgrade-184300\proxy-client.key
	I1109 14:31:25.950705    1604 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\10336.pem (1338 bytes)
	W1109 14:31:25.950865    1604 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\10336_empty.pem, impossibly tiny 0 bytes
	I1109 14:31:25.950865    1604 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1109 14:31:25.950865    1604 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1109 14:31:25.951402    1604 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1109 14:31:25.951580    1604 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1109 14:31:25.952021    1604 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\103362.pem (1708 bytes)
	I1109 14:31:25.953404    1604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:31:25.996673    1604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 14:31:26.036716    1604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:31:26.081613    1604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:31:26.114615    1604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\missing-upgrade-184300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1109 14:31:26.147776    1604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\missing-upgrade-184300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:31:26.190792    1604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\missing-upgrade-184300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:31:26.226776    1604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\missing-upgrade-184300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1109 14:31:26.274820    1604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\10336.pem --> /usr/share/ca-certificates/10336.pem (1338 bytes)
	I1109 14:31:26.314717    1604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\103362.pem --> /usr/share/ca-certificates/103362.pem (1708 bytes)
	I1109 14:31:26.353895    1604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:31:26.399812    1604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:31:26.450640    1604 ssh_runner.go:195] Run: openssl version
	I1109 14:31:26.468639    1604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103362.pem && ln -fs /usr/share/ca-certificates/103362.pem /etc/ssl/certs/103362.pem"
	I1109 14:31:26.492640    1604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103362.pem
	I1109 14:31:26.502863    1604 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:39 /usr/share/ca-certificates/103362.pem
	I1109 14:31:26.510553    1604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103362.pem
	I1109 14:31:26.539576    1604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103362.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:31:26.564862    1604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:31:26.588869    1604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:31:26.597885    1604 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:31 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:31:26.608873    1604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:31:26.625863    1604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:31:26.650865    1604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10336.pem && ln -fs /usr/share/ca-certificates/10336.pem /etc/ssl/certs/10336.pem"
	I1109 14:31:26.674861    1604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10336.pem
	I1109 14:31:26.681862    1604 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:39 /usr/share/ca-certificates/10336.pem
	I1109 14:31:26.687862    1604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10336.pem
	I1109 14:31:26.706260    1604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10336.pem /etc/ssl/certs/51391683.0"
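
Each openssl x509 -hash / ln -fs pair above installs a CA into the OpenSSL hash-link layout: TLS libraries resolve a trusted cert by the hash of its subject name through a /etc/ssl/certs/<hash>.0 symlink. The hash algorithm is OpenSSL-specific, so this sketch shells out for it rather than reimplementing it; the paths are illustrative:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // hashLink computes the OpenSSL subject hash of certPath (by invoking
    // openssl, as the log does) and links /etc/ssl/certs/<hash>.0 to it.
    func hashLink(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // replace any stale link, like ln -fs
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }
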
	I1109 14:31:26.728549    1604 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:31:26.755350    1604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:31:26.776832    1604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:31:26.800541    1604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:31:26.816578    1604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:31:26.841429    1604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:31:26.859424    1604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 14:31:26.870425    1604 kubeadm.go:401] StartCluster: {Name:missing-upgrade-184300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-184300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:31:26.875423    1604 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1109 14:31:26.912321    1604 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W1109 14:31:26.925889    1604 kubeadm.go:414] apiserver tunnel failed: apiserver port not set
	I1109 14:31:26.925926    1604 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:31:26.925926    1604 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:31:26.935064    1604 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:31:26.954966    1604 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:31:26.961492    1604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" missing-upgrade-184300
	I1109 14:31:27.018234    1604 kubeconfig.go:125] found "missing-upgrade-184300" server: "https://127.0.0.1:51542"
	I1109 14:31:27.018287    1604 kubeconfig.go:47] verify endpoint returned: got: 127.0.0.1:51542, want: 127.0.0.1:51763
	I1109 14:31:27.018897    1604 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig needs server address update]
	I1109 14:31:27.020013    1604 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:31:27.037540    1604 kapi.go:59] client config for missing-upgrade-184300: &rest.Config{Host:"https://127.0.0.1:51763", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\missing-upgrade-184300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\missing-upgrade-184300\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x30e6080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 14:31:27.038539    1604 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1109 14:31:27.038539    1604 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1109 14:31:27.038539    1604 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1109 14:31:27.038539    1604 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1109 14:31:27.038539    1604 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1109 14:31:27.045536    1604 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:31:27.063157    1604 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-09 14:29:44.523872657 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-09 14:31:25.556093299 +0000
	@@ -50,6 +50,7 @@
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	 cgroupDriver: cgroupfs
	+containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	 hairpinMode: hairpin-veth
	 runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	
	-- /stdout --
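
The drift check is a plain comparison of the rendered kubeadm.yaml.new against the copy already on disk; here they differ by one added containerRuntimeEndpoint line, so the cluster is reconfigured from the new file and, a few steps below, the new file is copied over the old. A sketch of that compare-then-promote decision, with local files standing in for the sudo diff/cp pair run over SSH:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // promoteIfDrifted copies next over current only when the contents
    // differ, the same decision the sudo diff -u / sudo cp pair makes.
    func promoteIfDrifted(current, next string) (bool, error) {
        a, err := os.ReadFile(current)
        if err != nil && !os.IsNotExist(err) {
            return false, err
        }
        b, err := os.ReadFile(next)
        if err != nil {
            return false, err
        }
        if bytes.Equal(a, b) {
            return false, nil
        }
        return true, os.WriteFile(current, b, 0o644)
    }

    func main() {
        drifted, err := promoteIfDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(drifted, err)
    }
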
	I1109 14:31:27.063157    1604 kubeadm.go:1161] stopping kube-system containers ...
	I1109 14:31:27.069301    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1109 14:31:27.102726    1604 docker.go:484] Stopping containers: [dcfa57dcafaf 06bbd18e7eee f5f64b2f8b66 b0b7529d23ff 67f5e9ab21d3 90b3c5d8d946 857f553dedee d608fc01fdeb]
	I1109 14:31:27.111628    1604 ssh_runner.go:195] Run: docker stop dcfa57dcafaf 06bbd18e7eee f5f64b2f8b66 b0b7529d23ff 67f5e9ab21d3 90b3c5d8d946 857f553dedee d608fc01fdeb
	I1109 14:31:27.158177    1604 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1109 14:31:27.186990    1604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:31:27.202507    1604 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:31:27.202507    1604 kubeadm.go:158] found existing configuration files:
	
	I1109 14:31:27.209952    1604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf
	I1109 14:31:27.224043    1604 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:31:27.230043    1604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:31:27.250042    1604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf
	I1109 14:31:27.264046    1604 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:31:27.270049    1604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:31:27.291043    1604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf
	I1109 14:31:27.305053    1604 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:31:27.312049    1604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:31:27.334045    1604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf
	I1109 14:31:27.348052    1604 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:31:27.354049    1604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 14:31:27.375045    1604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:31:27.396044    1604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 14:31:27.477051    1604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 14:31:29.232542    1604 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.7554711s)
	I1109 14:31:29.241535    1604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1109 14:31:29.496157    1604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 14:31:29.603186    1604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
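
Rather than a full kubeadm init, the restart path replays individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same config, each under a PATH that puts the pinned v1.28.3 binaries first. A sketch of that loop; runSSH is again a hypothetical stand-in for the remote runner:

    package main

    import "fmt"

    // runSSH is a hypothetical stand-in for minikube's ssh_runner.
    func runSSH(cmd string) error { fmt.Println("run:", cmd); return nil }

    func main() {
        const binDir = "/var/lib/minikube/binaries/v1.28.3"
        const cfg = "/var/tmp/minikube/kubeadm.yaml"
        // Phase order as it appears in the log.
        for _, phase := range []string{
            "certs all",
            "kubeconfig all",
            "kubelet-start",
            "control-plane all",
            "etcd local",
        } {
            cmd := fmt.Sprintf(`sudo /bin/bash -c "env PATH=%s:$PATH kubeadm init phase %s --config %s"`, binDir, phase, cfg)
            if err := runSSH(cmd); err != nil {
                fmt.Println(err)
                return
            }
        }
    }
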
	I1109 14:31:29.729974    1604 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:31:29.741540    1604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:31:30.242716    1604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:31:30.738656    1604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:31:31.237623    1604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:31:31.741354    1604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:31:31.830811    1604 api_server.go:72] duration metric: took 2.1007222s to wait for apiserver process to appear ...
	I1109 14:31:31.830889    1604 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:31:31.830939    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:31:31.834255    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:31:32.331230    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:31:37.332470    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1109 14:31:37.332470    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:31:42.332773    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1109 14:31:42.332773    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:31:47.333890    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1109 14:31:47.333890    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:31:52.334734    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1109 14:31:52.334734    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:31:52.953055    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:31:52.953055    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:31:52.956595    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:31:53.332159    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:31:53.335657    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:31:53.832085    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:31:53.836188    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:31:54.332154    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:31:54.334687    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:31:54.831889    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:31:54.835184    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:31:55.331973    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:31:55.334851    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:31:55.832210    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:31:55.835414    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:31:56.331401    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:31:56.333080    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:31:56.832762    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:31:56.836298    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:31:57.332235    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:31:57.334251    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:31:57.832117    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:31:57.834603    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:31:58.331988    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:31:58.334984    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:31:58.832340    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:31:58.835323    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:31:59.331925    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:31:59.333936    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:31:59.832640    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:31:59.834629    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:00.332982    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:00.335473    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:00.832085    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:00.834091    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:01.332414    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:01.334421    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:01.832135    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:01.834142    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:02.332307    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:02.336048    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:02.831505    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:02.833521    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:03.331962    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:03.333944    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:03.831903    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:03.834073    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:04.331665    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:09.331926    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1109 14:32:09.332030    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:14.333618    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1109 14:32:14.333618    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:19.334119    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1109 14:32:19.334119    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:24.100869    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:24.100869    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:24.104811    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:24.332367    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:24.335760    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:24.831845    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:24.836226    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:25.332221    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:25.335653    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:25.832346    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:25.834921    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:26.332208    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:26.336176    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:26.832498    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:26.835708    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:27.332535    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:27.336765    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:27.831995    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:27.835327    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:28.331736    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:28.334981    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:28.832842    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:28.836449    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:29.332043    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:29.335057    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:29.831982    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:29.834773    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:30.332401    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:30.336278    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:30.831723    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:30.835318    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:31.331766    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:31.335358    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
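
The healthz wait above alternates between immediate EOFs (the port forwards but nothing is serving yet, retried on a roughly 500ms cadence) and 5s per-request timeouts, and keeps polling until the overall start deadline. A sketch of such a probe; skipping certificate verification is an assumption for a bootstrap-time check, not necessarily what minikube's client does:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // pollHealthz probes /healthz until it returns 200 or the deadline
    // passes, mirroring the retry cadence visible in the log.
    func pollHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // per-request cap, as in the 5s gaps above
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver healthz not ready within %s", timeout)
    }

    func main() {
        fmt.Println(pollHealthz("https://127.0.0.1:51763/healthz", 2*time.Minute))
    }
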
	I1109 14:32:31.841052    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:32:31.881498    1604 logs.go:282] 1 containers: [3b408642cd69]
	I1109 14:32:31.889309    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:32:31.920648    1604 logs.go:282] 1 containers: [348147c99d77]
	I1109 14:32:31.927708    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:32:31.959150    1604 logs.go:282] 0 containers: []
	W1109 14:32:31.959150    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:32:31.966063    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:32:31.997469    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:32:32.004053    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:32:32.047237    1604 logs.go:282] 0 containers: []
	W1109 14:32:32.047237    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:32:32.055564    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:32:32.088418    1604 logs.go:282] 2 containers: [25752fe0c294 b0b7529d23ff]
	I1109 14:32:32.094186    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:32:32.126237    1604 logs.go:282] 0 containers: []
	W1109 14:32:32.126237    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:32:32.133165    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:32:32.163341    1604 logs.go:282] 0 containers: []
	W1109 14:32:32.163393    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:32:32.163393    1604 logs.go:123] Gathering logs for kube-apiserver [3b408642cd69] ...
	I1109 14:32:32.163487    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b408642cd69"
	I1109 14:32:32.205463    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:32:32.205504    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:32:32.237196    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:32:32.237276    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:32:32.319575    1604 logs.go:123] Gathering logs for etcd [348147c99d77] ...
	I1109 14:32:32.319652    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348147c99d77"
	I1109 14:32:32.357516    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:32:32.357516    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:32:32.391465    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:32:32.391465    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:32:32.432189    1604 logs.go:123] Gathering logs for kube-controller-manager [b0b7529d23ff] ...
	I1109 14:32:32.432189    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0b7529d23ff"
	I1109 14:32:32.471578    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:32:32.471659    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:32:32.501652    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:32:32.501652    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:32:32.568630    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:32:32.568630    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:32:32.597337    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:32:32.597337    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:32:32.686932    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:32:32.678335    3148 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:32:32.679202    3148 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:32:32.681890    3148 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:32:32.683146    3148 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:32:32.684774    3148 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
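Every "connection refused" line above is kubectl failing to reach [::1]:8443: nothing is listening on that port, so the apiserver is down or mid-restart, and the describe-nodes step is logged as a warning and retried on the next cycle. A hedged sketch of running and classifying such a step (the kubectl binary and kubeconfig paths are taken verbatim from the log; the retry decision is illustrative, not minikube's logic):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same invocation as the "describe nodes" step above.
    	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.3/kubectl",
    		"describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		if strings.Contains(string(out), "connection refused") {
    			// Nothing listening on localhost:8443: the apiserver is down
    			// or mid-restart, so warn and let the next cycle retry.
    			fmt.Println("apiserver not reachable yet; will retry")
    			return
    		}
    		fmt.Println("describe nodes failed:", err)
    		return
    	}
    	fmt.Print(string(out))
    }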
	I1109 14:32:35.187285    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:35.191311    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
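The healthz probe itself is a plain HTTPS GET against the port Docker forwards to the apiserver; an immediate EOF, as here, means the port accepts the TCP connection but nothing completes the TLS exchange yet. A self-contained sketch of one probe, assuming a 5-second client timeout and skipped certificate verification (both assumptions; this is not minikube's api_server.go):

    package main

    import (
    	"crypto/tls"
    	"errors"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // One probe of the forwarded healthz endpoint. An io.EOF here corresponds
    // to the "stopped: ... EOF" lines above: the TCP port answers, TLS does not.
    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // assumed; the later "Client.Timeout exceeded" gap suggests ~5s
    		Transport: &http.Transport{
    			// The apiserver cert is self-signed inside the node; skipping
    			// verification is acceptable for a liveness probe only.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		if errors.Is(err, io.EOF) {
    			return fmt.Errorf("stopped: %w", err)
    		}
    		return err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %s", resp.Status)
    	}
    	return nil
    }

    func main() {
    	if err := checkHealthz("https://127.0.0.1:51763/healthz"); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("apiserver healthy")
    }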
	I1109 14:32:35.197913    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:32:35.229756    1604 logs.go:282] 1 containers: [3b408642cd69]
	I1109 14:32:35.236039    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:32:35.267534    1604 logs.go:282] 1 containers: [348147c99d77]
	I1109 14:32:35.274108    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:32:35.305960    1604 logs.go:282] 0 containers: []
	W1109 14:32:35.305960    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:32:35.312314    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:32:35.343933    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:32:35.350745    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:32:35.388915    1604 logs.go:282] 0 containers: []
	W1109 14:32:35.388915    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:32:35.394907    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:32:35.426910    1604 logs.go:282] 2 containers: [25752fe0c294 b0b7529d23ff]
	I1109 14:32:35.431916    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:32:35.460904    1604 logs.go:282] 0 containers: []
	W1109 14:32:35.460904    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:32:35.466910    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:32:35.496022    1604 logs.go:282] 0 containers: []
	W1109 14:32:35.496022    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:32:35.496022    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:32:35.496022    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:32:35.521026    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:32:35.521026    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:32:35.625801    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:32:35.606851    3247 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:32:35.608736    3247 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:32:35.610417    3247 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:32:35.612254    3247 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:32:35.613741    3247 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:32:35.625801    1604 logs.go:123] Gathering logs for etcd [348147c99d77] ...
	I1109 14:32:35.625801    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348147c99d77"
	I1109 14:32:35.670230    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:32:35.670230    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:32:35.709159    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:32:35.709159    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:32:35.757970    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:32:35.757970    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:32:35.790280    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:32:35.790316    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:32:35.860504    1604 logs.go:123] Gathering logs for kube-apiserver [3b408642cd69] ...
	I1109 14:32:35.860504    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b408642cd69"
	I1109 14:32:35.914233    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:32:35.914233    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:32:35.955902    1604 logs.go:123] Gathering logs for kube-controller-manager [b0b7529d23ff] ...
	I1109 14:32:35.955902    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0b7529d23ff"
	I1109 14:32:35.994901    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:32:35.994901    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:32:38.588821    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:38.592098    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:38.598518    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:32:38.631033    1604 logs.go:282] 1 containers: [3b408642cd69]
	I1109 14:32:38.639643    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:32:38.677462    1604 logs.go:282] 1 containers: [348147c99d77]
	I1109 14:32:38.683698    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:32:38.712859    1604 logs.go:282] 0 containers: []
	W1109 14:32:38.712859    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:32:38.720677    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:32:38.753687    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:32:38.763063    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:32:38.792728    1604 logs.go:282] 0 containers: []
	W1109 14:32:38.792728    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:32:38.803398    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:32:38.836913    1604 logs.go:282] 2 containers: [25752fe0c294 b0b7529d23ff]
	I1109 14:32:38.844588    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:32:38.878946    1604 logs.go:282] 0 containers: []
	W1109 14:32:38.878946    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:32:38.884933    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:32:38.913225    1604 logs.go:282] 0 containers: []
	W1109 14:32:38.913225    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:32:38.913225    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:32:38.913225    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:32:38.959992    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:32:38.959992    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:32:39.046279    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:32:39.046279    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:32:39.136674    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:32:39.122856    3446 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:32:39.123731    3446 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:32:39.126188    3446 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:32:39.127000    3446 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:32:39.128509    3446 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:32:39.136674    1604 logs.go:123] Gathering logs for kube-apiserver [3b408642cd69] ...
	I1109 14:32:39.136674    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b408642cd69"
	I1109 14:32:39.173658    1604 logs.go:123] Gathering logs for etcd [348147c99d77] ...
	I1109 14:32:39.173658    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348147c99d77"
	I1109 14:32:39.206061    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:32:39.206061    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:32:39.241072    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:32:39.241107    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:32:39.277778    1604 logs.go:123] Gathering logs for kube-controller-manager [b0b7529d23ff] ...
	I1109 14:32:39.277778    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0b7529d23ff"
	I1109 14:32:39.322704    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:32:39.322704    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:32:39.351645    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:32:39.351645    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:32:39.412646    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:32:39.412646    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:32:41.936759    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:41.939748    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:41.950298    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:32:41.988339    1604 logs.go:282] 1 containers: [3b408642cd69]
	I1109 14:32:41.994331    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:32:42.023011    1604 logs.go:282] 1 containers: [348147c99d77]
	I1109 14:32:42.031973    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:32:42.068813    1604 logs.go:282] 0 containers: []
	W1109 14:32:42.068813    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:32:42.074807    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:32:42.109807    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:32:42.115821    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:32:42.146812    1604 logs.go:282] 0 containers: []
	W1109 14:32:42.146812    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:32:42.154836    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:32:42.192813    1604 logs.go:282] 2 containers: [25752fe0c294 b0b7529d23ff]
	I1109 14:32:42.199818    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:32:42.232001    1604 logs.go:282] 0 containers: []
	W1109 14:32:42.232001    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:32:42.237988    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:32:42.266988    1604 logs.go:282] 0 containers: []
	W1109 14:32:42.266988    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:32:42.266988    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:32:42.266988    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:32:42.354762    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:32:42.337644    3586 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:32:42.341811    3586 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:32:42.343696    3586 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:32:42.344601    3586 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:32:42.346670    3586 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:32:42.354762    1604 logs.go:123] Gathering logs for etcd [348147c99d77] ...
	I1109 14:32:42.354762    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348147c99d77"
	I1109 14:32:42.388521    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:32:42.388598    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:32:42.424446    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:32:42.425447    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:32:42.466437    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:32:42.466437    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:32:42.505415    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:32:42.505481    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:32:42.607192    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:32:42.607295    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:32:42.671263    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:32:42.672269    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:32:42.696754    1604 logs.go:123] Gathering logs for kube-apiserver [3b408642cd69] ...
	I1109 14:32:42.696830    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b408642cd69"
	I1109 14:32:42.743262    1604 logs.go:123] Gathering logs for kube-controller-manager [b0b7529d23ff] ...
	I1109 14:32:42.743352    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0b7529d23ff"
	I1109 14:32:42.779668    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:32:42.779668    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:32:45.309885    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:45.312773    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:32:45.318890    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:32:45.351014    1604 logs.go:282] 1 containers: [3b408642cd69]
	I1109 14:32:45.357669    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:32:45.393267    1604 logs.go:282] 1 containers: [348147c99d77]
	I1109 14:32:45.399982    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:32:45.431596    1604 logs.go:282] 0 containers: []
	W1109 14:32:45.431596    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:32:45.438402    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:32:45.474950    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:32:45.481811    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:32:45.518013    1604 logs.go:282] 0 containers: []
	W1109 14:32:45.518013    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:32:45.524026    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:32:45.556713    1604 logs.go:282] 2 containers: [25752fe0c294 b0b7529d23ff]
	I1109 14:32:45.562718    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:32:45.593479    1604 logs.go:282] 0 containers: []
	W1109 14:32:45.593479    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:32:45.600891    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:32:45.634005    1604 logs.go:282] 0 containers: []
	W1109 14:32:45.634005    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:32:45.634005    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:32:45.634005    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:32:45.700995    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:32:45.700995    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:32:45.725002    1604 logs.go:123] Gathering logs for kube-apiserver [3b408642cd69] ...
	I1109 14:32:45.725002    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b408642cd69"
	I1109 14:32:45.764993    1604 logs.go:123] Gathering logs for etcd [348147c99d77] ...
	I1109 14:32:45.764993    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348147c99d77"
	I1109 14:32:45.801015    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:32:45.801091    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:32:45.839780    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:32:45.839871    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:32:45.886962    1604 logs.go:123] Gathering logs for kube-controller-manager [b0b7529d23ff] ...
	I1109 14:32:45.886962    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0b7529d23ff"
	I1109 14:32:45.929302    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:32:45.929302    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:32:46.017174    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:32:46.017174    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:32:46.117419    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:32:46.101238    3836 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:32:46.101986    3836 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:32:46.106482    3836 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:32:46.107665    3836 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:32:46.108422    3836 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:32:46.117483    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:32:46.117483    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:32:46.156994    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:32:46.156994    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:32:48.687155    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:32:53.687604    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
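This attempt fails differently: the client's own timeout fires ("context deadline exceeded ... Client.Timeout exceeded") after about five seconds instead of an instant EOF, consistent with an apiserver that accepted the connection but never answered. The timestamps also show the probe repeating roughly every three seconds; a self-contained sketch of that cadence with an overall give-up deadline (the four-minute figure is illustrative, not minikube's value):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second, // matches the ~5s gap before "Client.Timeout exceeded" above
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	// Probe on the ~3s cadence visible between "Checking apiserver healthz"
    	// lines in this log.
    	deadline := time.After(4 * time.Minute)
    	tick := time.NewTicker(3 * time.Second)
    	defer tick.Stop()
    	for {
    		select {
    		case <-deadline:
    			fmt.Println("gave up: apiserver never became healthy")
    			return
    		case <-tick.C:
    			resp, err := client.Get("https://127.0.0.1:51763/healthz")
    			if err != nil {
    				fmt.Println("stopped:", err) // EOF or timeout, as in the log
    				continue
    			}
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    		}
    	}
    }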
	I1109 14:32:53.695039    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:32:53.729721    1604 logs.go:282] 2 containers: [1d7002c13770 3b408642cd69]
	I1109 14:32:53.735950    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:32:53.768049    1604 logs.go:282] 1 containers: [c23c494c3744]
	I1109 14:32:53.774222    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:32:53.804062    1604 logs.go:282] 0 containers: []
	W1109 14:32:53.804062    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:32:53.809810    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:32:53.841128    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:32:53.847649    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:32:53.876039    1604 logs.go:282] 0 containers: []
	W1109 14:32:53.876039    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:32:53.882853    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:32:53.911549    1604 logs.go:282] 2 containers: [25752fe0c294 b0b7529d23ff]
	I1109 14:32:53.917425    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:32:53.956082    1604 logs.go:282] 0 containers: []
	W1109 14:32:53.956137    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:32:53.962920    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:32:54.001634    1604 logs.go:282] 0 containers: []
	W1109 14:32:54.001634    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:32:54.001634    1604 logs.go:123] Gathering logs for kube-apiserver [3b408642cd69] ...
	I1109 14:32:54.001634    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b408642cd69"
	I1109 14:32:54.050382    1604 logs.go:123] Gathering logs for etcd [c23c494c3744] ...
	I1109 14:32:54.050382    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23c494c3744"
	I1109 14:32:54.088417    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:32:54.088417    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:32:54.111291    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:32:54.111401    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:32:54.144482    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:32:54.144482    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:32:54.187187    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:32:54.187187    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:32:54.220159    1604 logs.go:123] Gathering logs for kube-controller-manager [b0b7529d23ff] ...
	I1109 14:32:54.220239    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0b7529d23ff"
	I1109 14:32:54.264380    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:32:54.264380    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:32:54.294913    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:32:54.294963    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:32:54.373169    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:32:54.373169    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:32:54.437954    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:32:54.437954    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1109 14:33:08.467820    1604 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (14.0297047s)
	W1109 14:33:08.467820    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:33:04.528448    4215 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": net/http: TLS handshake timeout
	E1109 14:33:08.455196    4215 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:35462->[::1]:8443: read: connection reset by peer
	E1109 14:33:08.456101    4215 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:08.459106    4215 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:08.460319    4215 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:33:08.467820    1604 logs.go:123] Gathering logs for kube-apiserver [1d7002c13770] ...
	I1109 14:33:08.467820    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d7002c13770"
	I1109 14:33:13.336962    1604 ssh_runner.go:235] Completed: /bin/bash -c "docker logs --tail 400 1d7002c13770": (4.8690399s)
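The ssh_runner.go:235 lines print a "Completed: ... (4.8690399s)" entry only when a command ran long enough to be worth flagging; the 4.9s docker logs call above and the 14s describe-nodes call before it both crossed that bar. A sketch of such duration flagging (the one-second threshold is a guess for illustration, not minikube's value):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // run executes a command through bash and, like the ssh_runner.go:235
    // lines above, prints the elapsed time when the call runs long.
    func run(cmd string) ([]byte, error) {
    	start := time.Now()
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if d := time.Since(start); d > time.Second {
    		fmt.Printf("Completed: %s: (%s)\n", cmd, d)
    	}
    	return out, err
    }

    func main() {
    	// The 4.9s docker logs call above is exactly the slow case this flags.
    	if _, err := run("docker logs --tail 400 1d7002c13770"); err != nil {
    		fmt.Println("command failed:", err)
    	}
    }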
	I1109 14:33:15.841756    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:33:15.844753    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:33:15.854748    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:33:15.892960    1604 logs.go:282] 1 containers: [1d7002c13770]
	I1109 14:33:15.901951    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:33:15.937511    1604 logs.go:282] 1 containers: [c23c494c3744]
	I1109 14:33:15.946509    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:33:15.982510    1604 logs.go:282] 0 containers: []
	W1109 14:33:15.982510    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:33:15.989512    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:33:16.044797    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:33:16.051384    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:33:16.086977    1604 logs.go:282] 0 containers: []
	W1109 14:33:16.086977    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:33:16.093974    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:33:16.151994    1604 logs.go:282] 2 containers: [f91f678ad8d3 25752fe0c294]
	I1109 14:33:16.161993    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:33:16.219704    1604 logs.go:282] 0 containers: []
	W1109 14:33:16.219704    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:33:16.225696    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:33:16.259270    1604 logs.go:282] 0 containers: []
	W1109 14:33:16.259329    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:33:16.259329    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:33:16.259375    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:33:16.352157    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:33:16.352157    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:33:16.374667    1604 logs.go:123] Gathering logs for kube-apiserver [1d7002c13770] ...
	I1109 14:33:16.374667    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d7002c13770"
	I1109 14:33:16.421176    1604 logs.go:123] Gathering logs for etcd [c23c494c3744] ...
	I1109 14:33:16.421176    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23c494c3744"
	I1109 14:33:16.467698    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:33:16.467698    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:33:16.527693    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:33:16.527693    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:33:16.572704    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:33:16.572704    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:33:16.633308    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:33:16.633883    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:33:16.745172    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:33:16.737076    4559 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:16.738205    4559 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:16.739403    4559 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:16.740381    4559 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:16.742755    4559 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:33:16.745172    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:33:16.745172    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:33:16.797853    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:33:16.797853    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:33:16.843791    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:33:16.843791    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:33:19.446100    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:33:19.449532    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:33:19.457240    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:33:19.494782    1604 logs.go:282] 1 containers: [1d7002c13770]
	I1109 14:33:19.501649    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:33:19.538988    1604 logs.go:282] 1 containers: [c23c494c3744]
	I1109 14:33:19.545161    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:33:19.579725    1604 logs.go:282] 0 containers: []
	W1109 14:33:19.579810    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:33:19.586257    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:33:19.618757    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:33:19.624425    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:33:19.656186    1604 logs.go:282] 0 containers: []
	W1109 14:33:19.656186    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:33:19.665316    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:33:19.702581    1604 logs.go:282] 2 containers: [f91f678ad8d3 25752fe0c294]
	I1109 14:33:19.710416    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:33:19.743793    1604 logs.go:282] 0 containers: []
	W1109 14:33:19.743793    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:33:19.751239    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:33:19.783856    1604 logs.go:282] 0 containers: []
	W1109 14:33:19.783856    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:33:19.783856    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:33:19.783856    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:33:19.823925    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:33:19.823925    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:33:19.860987    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:33:19.861074    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:33:19.892335    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:33:19.892335    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:33:19.913832    1604 logs.go:123] Gathering logs for etcd [c23c494c3744] ...
	I1109 14:33:19.913866    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23c494c3744"
	I1109 14:33:19.950688    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:33:19.950688    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:33:19.995647    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:33:19.995647    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:33:20.079954    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:33:20.080007    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:33:20.160889    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:33:20.160889    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:33:20.253154    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:33:20.243164    4771 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:20.244639    4771 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:20.245661    4771 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:20.247253    4771 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:20.248119    4771 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:33:20.253154    1604 logs.go:123] Gathering logs for kube-apiserver [1d7002c13770] ...
	I1109 14:33:20.253154    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d7002c13770"
	I1109 14:33:20.295220    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:33:20.295875    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:33:22.841755    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:33:22.844656    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:33:22.850872    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:33:22.882912    1604 logs.go:282] 1 containers: [1d7002c13770]
	I1109 14:33:22.890833    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:33:22.922804    1604 logs.go:282] 1 containers: [c23c494c3744]
	I1109 14:33:22.929079    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:33:22.959378    1604 logs.go:282] 0 containers: []
	W1109 14:33:22.959378    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:33:22.966040    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:33:22.998042    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:33:23.005125    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:33:23.034860    1604 logs.go:282] 0 containers: []
	W1109 14:33:23.034860    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:33:23.041660    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:33:23.075727    1604 logs.go:282] 2 containers: [f91f678ad8d3 25752fe0c294]
	I1109 14:33:23.081666    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:33:23.114780    1604 logs.go:282] 0 containers: []
	W1109 14:33:23.115774    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:33:23.121370    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:33:23.150721    1604 logs.go:282] 0 containers: []
	W1109 14:33:23.150721    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:33:23.150721    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:33:23.150721    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:33:23.180759    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:33:23.180759    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:33:23.208938    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:33:23.209040    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:33:23.298512    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:33:23.289496    4873 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:23.290424    4873 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:23.292482    4873 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:23.293604    4873 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:23.294355    4873 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:33:23.298612    1604 logs.go:123] Gathering logs for etcd [c23c494c3744] ...
	I1109 14:33:23.298672    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23c494c3744"
	I1109 14:33:23.333026    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:33:23.333026    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:33:23.375041    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:33:23.375041    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:33:23.422628    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:33:23.422628    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:33:23.459560    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:33:23.459560    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
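	The container-status command above is a two-stage fallback: the backquoted "which crictl || echo crictl" substitutes the full path of crictl when it is installed (or the bare word crictl, which then fails to resolve), and the trailing "|| sudo docker ps -a" falls back to Docker when the crictl invocation fails for any reason. A hedged Go sketch of the same first-tool-that-works pattern, written against os/exec instead of a shell; the function name and candidate list are illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // listContainers tries each runtime CLI in order and returns the
    // output of the first one that succeeds, mirroring the shell
    // fallback used in the log above.
    func listContainers() (string, error) {
        candidates := [][]string{
            {"crictl", "ps", "-a"},
            {"docker", "ps", "-a"},
        }
        var lastErr error
        for _, c := range candidates {
            out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
            if err == nil {
                return string(out), nil
            }
            lastErr = err
        }
        return "", fmt.Errorf("no container runtime CLI available: %w", lastErr)
    }

    func main() {
        out, err := listContainers()
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Print(out)
    }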
	I1109 14:33:23.552247    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:33:23.552247    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:33:23.627662    1604 logs.go:123] Gathering logs for kube-apiserver [1d7002c13770] ...
	I1109 14:33:23.627662    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d7002c13770"
	I1109 14:33:23.666228    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:33:23.666330    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:33:26.201812    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:33:26.204222    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:33:26.210821    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:33:26.242136    1604 logs.go:282] 1 containers: [1d7002c13770]
	I1109 14:33:26.248797    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:33:26.279726    1604 logs.go:282] 1 containers: [c23c494c3744]
	I1109 14:33:26.285645    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:33:26.316576    1604 logs.go:282] 0 containers: []
	W1109 14:33:26.316576    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:33:26.322543    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:33:26.353482    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:33:26.359660    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:33:26.389838    1604 logs.go:282] 0 containers: []
	W1109 14:33:26.389838    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:33:26.396500    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:33:26.431915    1604 logs.go:282] 2 containers: [f91f678ad8d3 25752fe0c294]
	I1109 14:33:26.438124    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:33:26.466694    1604 logs.go:282] 0 containers: []
	W1109 14:33:26.467711    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:33:26.472691    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:33:26.502710    1604 logs.go:282] 0 containers: []
	W1109 14:33:26.502710    1604 logs.go:284] No container was found matching "storage-provisioner"
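	Each enumeration pass above filters "docker ps -a" on a k8s_ name prefix. This works because cri-dockerd (like the old dockershim) names kubelet-created containers k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so a filter such as name=k8s_kube-scheduler matches only that component. Since -a also lists exited containers, the "2 containers" results for kube-scheduler and kube-controller-manager point to a crashed instance sitting next to its restart. A sketch of the same enumeration, assumed to run directly on the node (minikube itself goes through ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs of kubelet-created containers for a
    // given control-plane component, relying on the k8s_<name>_...
    // naming convention used by cri-dockerd.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }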
	I1109 14:33:26.502710    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:33:26.502710    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:33:26.589570    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:33:26.590577    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:33:26.671596    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:33:26.671596    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:33:26.766679    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:33:26.755427    5079 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:26.756829    5079 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:26.757735    5079 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:26.759399    5079 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:26.760231    5079 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:33:26.766679    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:33:26.766679    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:33:26.822067    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:33:26.822067    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:33:26.860735    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:33:26.861308    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:33:26.883469    1604 logs.go:123] Gathering logs for kube-apiserver [1d7002c13770] ...
	I1109 14:33:26.883513    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d7002c13770"
	I1109 14:33:26.927369    1604 logs.go:123] Gathering logs for etcd [c23c494c3744] ...
	I1109 14:33:26.927369    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23c494c3744"
	I1109 14:33:26.959372    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:33:26.959372    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:33:27.007793    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:33:27.008314    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:33:27.044573    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:33:27.044573    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
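	The healthz checks land roughly three seconds apart (14:33:23, :26, :29, ...): a fixed-interval poll with an overall deadline, running a full round of log gathering after every failed probe. A minimal sketch of that control flow; the 3-second interval matches the spacing in this log, but the timeout value below is only an assumption for illustration:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForHealthy calls probe every interval until it succeeds or
    // the deadline passes, much like the retry loop producing the
    // api_server.go lines in this log.
    func waitForHealthy(probe func() error, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if err := probe(); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for apiserver")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        err := waitForHealthy(func() error {
            return errors.New("connection refused") // stand-in for a real healthz probe
        }, 3*time.Second, 9*time.Second)
        fmt.Println(err)
    }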
	I1109 14:33:29.571203    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:33:29.573203    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:33:29.579203    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:33:29.614227    1604 logs.go:282] 1 containers: [1d7002c13770]
	I1109 14:33:29.622194    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:33:29.653503    1604 logs.go:282] 1 containers: [c23c494c3744]
	I1109 14:33:29.659505    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:33:29.691055    1604 logs.go:282] 0 containers: []
	W1109 14:33:29.691055    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:33:29.697069    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:33:29.730643    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:33:29.740266    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:33:29.774755    1604 logs.go:282] 0 containers: []
	W1109 14:33:29.774755    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:33:29.780754    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:33:29.811300    1604 logs.go:282] 2 containers: [f91f678ad8d3 25752fe0c294]
	I1109 14:33:29.818034    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:33:29.848530    1604 logs.go:282] 0 containers: []
	W1109 14:33:29.848530    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:33:29.855166    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:33:29.885782    1604 logs.go:282] 0 containers: []
	W1109 14:33:29.885782    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:33:29.885782    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:33:29.885782    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:33:29.909498    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:33:29.909572    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:33:29.999729    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:33:29.988090    5235 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:29.989271    5235 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:29.990634    5235 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:29.991589    5235 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:29.995958    5235 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:33:29.999729    1604 logs.go:123] Gathering logs for etcd [c23c494c3744] ...
	I1109 14:33:29.999729    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23c494c3744"
	I1109 14:33:30.035219    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:33:30.035288    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:33:30.092445    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:33:30.092445    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:33:30.139646    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:33:30.139646    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:33:30.175652    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:33:30.175652    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:33:30.214636    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:33:30.214711    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:33:30.294798    1604 logs.go:123] Gathering logs for kube-apiserver [1d7002c13770] ...
	I1109 14:33:30.294798    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d7002c13770"
	I1109 14:33:30.341943    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:33:30.342041    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:33:30.375217    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:33:30.375217    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:33:32.960088    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:33:32.963022    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
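	Note the two failure shapes in play: the probe of the forwarded port 127.0.0.1:51763 fails with EOF, meaning the TCP connect succeeded (Docker Desktop's port proxy accepts it) but the peer closed the connection before any HTTP response, while kubectl inside the node gets an outright connection refused on localhost:8443. A small sketch distinguishing the two cases at the TCP level; probing a TLS port without completing a handshake is crude, but it is enough to separate "refused" from "accepted, then closed":

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // classify dials addr and reports whether the connect itself failed
    // (connection refused) or succeeded with the peer hanging up on the
    // first read (typically surfacing as EOF, as in the probe above).
    func classify(addr string) {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            fmt.Println(addr, "connect failed:", err)
            return
        }
        defer conn.Close()
        conn.SetReadDeadline(time.Now().Add(2 * time.Second))
        buf := make([]byte, 1)
        if _, err := conn.Read(buf); err != nil {
            fmt.Println(addr, "connected, then:", err)
            return
        }
        fmt.Println(addr, "connected and readable")
    }

    func main() {
        classify("127.0.0.1:51763")
    }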
	I1109 14:33:32.969242    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:33:32.999166    1604 logs.go:282] 1 containers: [1d7002c13770]
	I1109 14:33:33.004956    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:33:33.035918    1604 logs.go:282] 1 containers: [c23c494c3744]
	I1109 14:33:33.041808    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:33:33.072238    1604 logs.go:282] 0 containers: []
	W1109 14:33:33.072238    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:33:33.078807    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:33:33.110062    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:33:33.115916    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:33:33.145316    1604 logs.go:282] 0 containers: []
	W1109 14:33:33.145399    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:33:33.151516    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:33:33.179113    1604 logs.go:282] 2 containers: [f91f678ad8d3 25752fe0c294]
	I1109 14:33:33.184924    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:33:33.216283    1604 logs.go:282] 0 containers: []
	W1109 14:33:33.216283    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:33:33.222441    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:33:33.251509    1604 logs.go:282] 0 containers: []
	W1109 14:33:33.251509    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:33:33.251509    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:33:33.251509    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:33:33.286055    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:33:33.286055    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:33:33.316499    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:33:33.316499    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:33:33.399037    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:33:33.399037    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:33:33.480538    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:33:33.480538    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:33:33.505368    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:33:33.505368    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:33:33.594714    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:33:33.583629    5447 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:33.585212    5447 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:33.586444    5447 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:33.588418    5447 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:33.590237    5447 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:33:33.594789    1604 logs.go:123] Gathering logs for kube-apiserver [1d7002c13770] ...
	I1109 14:33:33.594825    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d7002c13770"
	I1109 14:33:33.634137    1604 logs.go:123] Gathering logs for etcd [c23c494c3744] ...
	I1109 14:33:33.634197    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23c494c3744"
	I1109 14:33:33.668491    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:33:33.668570    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:33:33.717872    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:33:33.717872    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:33:33.760743    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:33:33.760743    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:33:36.310422    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:33:36.313027    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:33:36.320621    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:33:36.353928    1604 logs.go:282] 1 containers: [1d7002c13770]
	I1109 14:33:36.363882    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:33:36.398091    1604 logs.go:282] 1 containers: [c23c494c3744]
	I1109 14:33:36.403512    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:33:36.436324    1604 logs.go:282] 0 containers: []
	W1109 14:33:36.436324    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:33:36.446352    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:33:36.481149    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:33:36.487601    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:33:36.517015    1604 logs.go:282] 0 containers: []
	W1109 14:33:36.517015    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:33:36.526275    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:33:36.561929    1604 logs.go:282] 2 containers: [f91f678ad8d3 25752fe0c294]
	I1109 14:33:36.568334    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:33:36.596849    1604 logs.go:282] 0 containers: []
	W1109 14:33:36.596908    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:33:36.602821    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:33:36.632322    1604 logs.go:282] 0 containers: []
	W1109 14:33:36.632322    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:33:36.632322    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:33:36.632322    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:33:36.670962    1604 logs.go:123] Gathering logs for kube-apiserver [1d7002c13770] ...
	I1109 14:33:36.670962    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d7002c13770"
	I1109 14:33:36.708556    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:33:36.708556    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:33:36.760093    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:33:36.760093    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:33:36.793097    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:33:36.793097    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:33:36.820094    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:33:36.820094    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:33:36.905548    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:33:36.905548    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:33:36.997336    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:33:36.997336    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:33:37.019340    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:33:37.019340    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:33:37.126272    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:33:37.113481    5649 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:37.115902    5649 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:37.117020    5649 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:37.118241    5649 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:37.119169    5649 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:33:37.126272    1604 logs.go:123] Gathering logs for etcd [c23c494c3744] ...
	I1109 14:33:37.126272    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23c494c3744"
	I1109 14:33:37.166269    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:33:37.166269    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:33:39.723487    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:33:39.727167    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:33:39.733551    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:33:39.766807    1604 logs.go:282] 1 containers: [1d7002c13770]
	I1109 14:33:39.773962    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:33:39.804638    1604 logs.go:282] 1 containers: [c23c494c3744]
	I1109 14:33:39.810817    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:33:39.839780    1604 logs.go:282] 0 containers: []
	W1109 14:33:39.839780    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:33:39.847178    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:33:39.879379    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:33:39.885515    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:33:39.916996    1604 logs.go:282] 0 containers: []
	W1109 14:33:39.916996    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:33:39.923136    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:33:39.954730    1604 logs.go:282] 2 containers: [f91f678ad8d3 25752fe0c294]
	I1109 14:33:39.961727    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:33:39.992895    1604 logs.go:282] 0 containers: []
	W1109 14:33:39.992895    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:33:40.000087    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:33:40.031195    1604 logs.go:282] 0 containers: []
	W1109 14:33:40.031195    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:33:40.031195    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:33:40.031195    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:33:40.117212    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:33:40.117212    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:33:40.139196    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:33:40.139196    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:33:40.230055    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:33:40.216652    5750 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:40.217487    5750 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:40.222906    5750 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:40.224143    5750 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:40.225172    5750 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:33:40.230055    1604 logs.go:123] Gathering logs for kube-apiserver [1d7002c13770] ...
	I1109 14:33:40.230055    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d7002c13770"
	I1109 14:33:40.273037    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:33:40.273070    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:33:40.324051    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:33:40.324051    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:33:40.360365    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:33:40.360365    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:33:40.388371    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:33:40.388371    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:33:40.474158    1604 logs.go:123] Gathering logs for etcd [c23c494c3744] ...
	I1109 14:33:40.474158    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23c494c3744"
	I1109 14:33:40.511765    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:33:40.511850    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:33:40.554974    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:33:40.554974    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:33:43.092899    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:33:43.095240    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:33:43.101709    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:33:43.143710    1604 logs.go:282] 1 containers: [1d7002c13770]
	I1109 14:33:43.151426    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:33:43.186259    1604 logs.go:282] 1 containers: [c23c494c3744]
	I1109 14:33:43.192084    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:33:43.229894    1604 logs.go:282] 0 containers: []
	W1109 14:33:43.229894    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:33:43.237400    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:33:43.272163    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:33:43.278316    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:33:43.307399    1604 logs.go:282] 0 containers: []
	W1109 14:33:43.307399    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:33:43.317070    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:33:43.358240    1604 logs.go:282] 2 containers: [f91f678ad8d3 25752fe0c294]
	I1109 14:33:43.364376    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:33:43.393198    1604 logs.go:282] 0 containers: []
	W1109 14:33:43.393198    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:33:43.403737    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:33:43.437777    1604 logs.go:282] 0 containers: []
	W1109 14:33:43.437777    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:33:43.437777    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:33:43.437777    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:33:43.521928    1604 logs.go:123] Gathering logs for etcd [c23c494c3744] ...
	I1109 14:33:43.521928    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23c494c3744"
	I1109 14:33:43.571274    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:33:43.571311    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:33:43.627864    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:33:43.627864    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:33:43.677165    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:33:43.677165    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:33:43.712862    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:33:43.712862    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:33:43.804080    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:33:43.804145    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:33:43.830253    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:33:43.830253    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:33:43.927649    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:33:43.911248    5996 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:43.913737    5996 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:43.915389    5996 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:43.919119    5996 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:43.921176    5996 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:33:43.927649    1604 logs.go:123] Gathering logs for kube-apiserver [1d7002c13770] ...
	I1109 14:33:43.927649    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d7002c13770"
	I1109 14:33:43.976066    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:33:43.976066    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:33:44.009419    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:33:44.010326    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:33:46.545738    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:33:46.550363    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:33:46.556798    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:33:46.591906    1604 logs.go:282] 1 containers: [1d7002c13770]
	I1109 14:33:46.598320    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:33:46.630324    1604 logs.go:282] 1 containers: [c23c494c3744]
	I1109 14:33:46.637448    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:33:46.670372    1604 logs.go:282] 0 containers: []
	W1109 14:33:46.670491    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:33:46.677443    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:33:46.709167    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:33:46.716739    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:33:46.749065    1604 logs.go:282] 0 containers: []
	W1109 14:33:46.749109    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:33:46.755677    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:33:46.786471    1604 logs.go:282] 2 containers: [f91f678ad8d3 25752fe0c294]
	I1109 14:33:46.794038    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:33:46.822810    1604 logs.go:282] 0 containers: []
	W1109 14:33:46.822810    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:33:46.830360    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:33:46.867043    1604 logs.go:282] 0 containers: []
	W1109 14:33:46.867086    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:33:46.867086    1604 logs.go:123] Gathering logs for etcd [c23c494c3744] ...
	I1109 14:33:46.867086    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23c494c3744"
	I1109 14:33:46.902386    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:33:46.902421    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:33:46.949387    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:33:46.949387    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:33:46.988346    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:33:46.988346    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:33:47.023795    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:33:47.023829    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:33:47.109682    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:33:47.109682    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:33:47.192622    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:33:47.192622    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:33:47.218856    1604 logs.go:123] Gathering logs for kube-apiserver [1d7002c13770] ...
	I1109 14:33:47.218856    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d7002c13770"
	I1109 14:33:47.262064    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:33:47.262064    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:33:47.316379    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:33:47.316379    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:33:47.350656    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:33:47.350656    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:33:47.446361    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:33:47.432880    6187 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:47.434645    6187 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:47.438035    6187 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:47.441246    6187 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:47.442237    6187 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:33:49.946881    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:33:49.955913    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:33:49.964964    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:33:50.005382    1604 logs.go:282] 1 containers: [1d7002c13770]
	I1109 14:33:50.011381    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:33:50.045385    1604 logs.go:282] 1 containers: [c23c494c3744]
	I1109 14:33:50.051383    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:33:50.084760    1604 logs.go:282] 0 containers: []
	W1109 14:33:50.084823    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:33:50.093870    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:33:50.126181    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:33:50.136826    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:33:50.184413    1604 logs.go:282] 0 containers: []
	W1109 14:33:50.184458    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:33:50.190767    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:33:50.225350    1604 logs.go:282] 2 containers: [f91f678ad8d3 25752fe0c294]
	I1109 14:33:50.232354    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:33:50.270925    1604 logs.go:282] 0 containers: []
	W1109 14:33:50.270925    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:33:50.280171    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:33:50.317006    1604 logs.go:282] 0 containers: []
	W1109 14:33:50.317006    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:33:50.317006    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:33:50.317006    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:33:50.411680    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:33:50.411680    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:33:50.505333    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:33:50.493056    6285 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:50.494051    6285 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:50.500367    6285 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:50.501661    6285 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:50.502644    6285 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:33:50.505333    1604 logs.go:123] Gathering logs for kube-apiserver [1d7002c13770] ...
	I1109 14:33:50.505333    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d7002c13770"
	I1109 14:33:50.553901    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:33:50.553901    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:33:50.596855    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:33:50.596894    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:33:50.633760    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:33:50.633760    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:33:50.665827    1604 logs.go:123] Gathering logs for etcd [c23c494c3744] ...
	I1109 14:33:50.665865    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23c494c3744"
	I1109 14:33:50.699860    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:33:50.699949    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:33:50.756792    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:33:50.756792    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:33:50.804871    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:33:50.804871    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:33:50.840340    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:33:50.840886    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:33:53.441872    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:33:53.443865    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:33:53.454286    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:33:53.483981    1604 logs.go:282] 1 containers: [1d7002c13770]
	I1109 14:33:53.489137    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:33:53.524219    1604 logs.go:282] 1 containers: [c23c494c3744]
	I1109 14:33:53.531266    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:33:53.562544    1604 logs.go:282] 0 containers: []
	W1109 14:33:53.562544    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:33:53.567806    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:33:53.601733    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:33:53.608615    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:33:53.641871    1604 logs.go:282] 0 containers: []
	W1109 14:33:53.641871    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:33:53.647871    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:33:53.679217    1604 logs.go:282] 2 containers: [f91f678ad8d3 25752fe0c294]
	I1109 14:33:53.685240    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:33:53.717142    1604 logs.go:282] 0 containers: []
	W1109 14:33:53.717142    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:33:53.726639    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:33:53.763271    1604 logs.go:282] 0 containers: []
	W1109 14:33:53.763271    1604 logs.go:284] No container was found matching "storage-provisioner"
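
Each gathering pass begins with the enumeration above: one docker ps per control-plane component, filtered on the k8s_<component> container-name prefix that cri-dockerd assigns. Two IDs for kube-scheduler and kube-controller-manager indicate an exited instance plus its restarted successor; zero for coredns, kube-proxy, and storage-provisioner likely means those pods were never created because the apiserver never came up. A sketch of the loop follows, assuming docker is on PATH; containerIDs is a hypothetical helper, not minikube's code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs (running or exited) whose name
// matches the k8s_<component> prefix used by cri-dockerd.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		// Mirrors the logs.go:282 lines above, e.g. "0 containers: []".
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	}
}
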
	I1109 14:33:53.763271    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:33:53.763271    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:33:53.809382    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:33:53.809382    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:33:53.849372    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:33:53.849372    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:33:53.885372    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:33:53.885372    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:33:53.919874    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:33:53.919874    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:33:54.008346    1604 logs.go:123] Gathering logs for kube-apiserver [1d7002c13770] ...
	I1109 14:33:54.008346    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d7002c13770"
	I1109 14:33:54.058528    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:33:54.058528    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:33:54.156042    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:33:54.156080    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:33:54.178530    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:33:54.178530    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:33:54.260091    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:33:54.253740    6517 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:54.254708    6517 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:54.256085    6517 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:54.257020    6517 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:54.258297    6517 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1109 14:33:54.253740    6517 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:54.254708    6517 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:54.256085    6517 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:54.257020    6517 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:33:54.258297    6517 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 14:33:54.260091    1604 logs.go:123] Gathering logs for etcd [c23c494c3744] ...
	I1109 14:33:54.261095    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23c494c3744"
	I1109 14:33:54.297703    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:33:54.297703    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:33:56.861090    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:33:56.863953    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:33:56.870722    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:33:56.902838    1604 logs.go:282] 2 containers: [578d9ee8b5ed 1d7002c13770]
	I1109 14:33:56.910020    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:33:56.943599    1604 logs.go:282] 1 containers: [c23c494c3744]
	I1109 14:33:56.950729    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:33:56.981727    1604 logs.go:282] 0 containers: []
	W1109 14:33:56.981727    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:33:56.986732    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:33:57.040762    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:33:57.046834    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:33:57.085111    1604 logs.go:282] 0 containers: []
	W1109 14:33:57.085196    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:33:57.091148    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:33:57.134903    1604 logs.go:282] 2 containers: [f91f678ad8d3 25752fe0c294]
	I1109 14:33:57.140900    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:33:57.176614    1604 logs.go:282] 0 containers: []
	W1109 14:33:57.176614    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:33:57.182603    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:33:57.231235    1604 logs.go:282] 0 containers: []
	W1109 14:33:57.231235    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:33:57.231235    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:33:57.231235    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:33:57.258444    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:33:57.258444    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:33:57.355691    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:33:57.355691    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:33:57.443643    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:33:57.443643    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:33:57.469093    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:33:57.469208    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1109 14:34:18.864555    1604 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (21.3951009s)
	W1109 14:34:18.864555    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:34:07.553384    6731 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": net/http: TLS handshake timeout
	E1109 14:34:17.556263    6731 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": net/http: TLS handshake timeout
	E1109 14:34:18.856725    6731 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:57360->[::1]:8443: read: connection reset by peer
	E1109 14:34:18.858238    6731 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:18.859227    6731 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1109 14:34:07.553384    6731 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": net/http: TLS handshake timeout
	E1109 14:34:17.556263    6731 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": net/http: TLS handshake timeout
	E1109 14:34:18.856725    6731 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:57360->[::1]:8443: read: connection reset by peer
	E1109 14:34:18.858238    6731 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:18.859227    6731 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
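
This failed describe-nodes attempt is the most informative one in the section: over its 21 seconds kubectl saw a TLS handshake timeout, then a connection reset mid-read, then plain connection refused. Those modes distinguish an apiserver wedged during startup from one that is crash-looping or absent entirely. A small triage sketch keyed on the error strings kubectl surfaces; the classification wording is an interpretation, not anything kubectl reports itself.

package main

import (
	"fmt"
	"strings"
)

// classify maps the connection errors seen in this log to what they imply
// about the apiserver process behind localhost:8443.
func classify(errMsg string) string {
	switch {
	case strings.Contains(errMsg, "connection refused"):
		return "nothing listening on the port (process down)"
	case strings.Contains(errMsg, "TLS handshake timeout"):
		return "socket open but TLS never completes (apiserver wedged during startup)"
	case strings.Contains(errMsg, "connection reset by peer"),
		strings.Contains(errMsg, "EOF"):
		return "connection accepted then dropped (likely crash loop)"
	default:
		return "unclassified: " + errMsg
	}
}

func main() {
	for _, e := range []string{
		`Get "https://localhost:8443/api?timeout=32s": net/http: TLS handshake timeout`,
		`Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused`,
	} {
		fmt.Println(classify(e))
	}
}
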
	I1109 14:34:18.864555    1604 logs.go:123] Gathering logs for kube-apiserver [578d9ee8b5ed] ...
	I1109 14:34:18.864555    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d9ee8b5ed"
	I1109 14:34:18.913520    1604 logs.go:123] Gathering logs for etcd [c23c494c3744] ...
	I1109 14:34:18.913520    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23c494c3744"
	I1109 14:34:18.953761    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:34:18.953761    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:34:19.009752    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:34:19.009752    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:34:19.059771    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:34:19.059771    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:34:19.128448    1604 logs.go:123] Gathering logs for kube-apiserver [1d7002c13770] ...
	I1109 14:34:19.128518    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d7002c13770"
	W1109 14:34:19.166421    1604 logs.go:130] failed kube-apiserver [1d7002c13770]: command: /bin/bash -c "docker logs --tail 400 1d7002c13770" /bin/bash -c "docker logs --tail 400 1d7002c13770": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 1d7002c13770
	 output: 
	** stderr ** 
	Error response from daemon: No such container: 1d7002c13770
	
	** /stderr **
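
The "No such container" failure above is a benign race rather than a new fault: 1d7002c13770 was enumerated at 14:33:56 alongside its replacement 578d9ee8b5ed, but Docker removed it before the docker logs call at 14:34:19. The obvious mitigation is to re-check existence immediately before fetching, sketched below with a hypothetical helper; this is not what minikube actually does.

package main

import (
	"fmt"
	"os/exec"
)

// logsIfPresent re-resolves the container just before fetching its logs, so
// an ID garbage-collected between enumeration and fetch is skipped instead
// of producing "No such container".
func logsIfPresent(id string) {
	// docker inspect exits non-zero when the container no longer exists.
	if err := exec.Command("docker", "inspect", id).Run(); err != nil {
		fmt.Printf("skipping %s: container gone (%v)\n", id, err)
		return
	}
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	if err != nil {
		fmt.Printf("logs %s failed: %v\n", id, err)
		return
	}
	fmt.Printf("%s\n", out)
}

func main() {
	logsIfPresent("1d7002c13770") // the ID that vanished in the log above
}
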
	I1109 14:34:19.166473    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:34:19.166473    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:34:21.710452    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:34:21.713835    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:34:21.722688    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:34:21.758808    1604 logs.go:282] 1 containers: [578d9ee8b5ed]
	I1109 14:34:21.765308    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:34:21.795108    1604 logs.go:282] 1 containers: [c23c494c3744]
	I1109 14:34:21.804354    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:34:21.833579    1604 logs.go:282] 0 containers: []
	W1109 14:34:21.833579    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:34:21.841245    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:34:21.878183    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:34:21.884697    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:34:21.918718    1604 logs.go:282] 0 containers: []
	W1109 14:34:21.918718    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:34:21.925576    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:34:21.965332    1604 logs.go:282] 2 containers: [f91f678ad8d3 25752fe0c294]
	I1109 14:34:21.971925    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:34:22.005246    1604 logs.go:282] 0 containers: []
	W1109 14:34:22.005292    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:34:22.011838    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:34:22.043083    1604 logs.go:282] 0 containers: []
	W1109 14:34:22.043083    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:34:22.043083    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:34:22.043083    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:34:22.073713    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:34:22.073775    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:34:22.207401    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:34:22.197031    6992 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:22.197969    6992 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:22.199756    6992 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:22.201604    6992 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:22.203110    6992 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1109 14:34:22.197031    6992 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:22.197969    6992 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:22.199756    6992 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:22.201604    6992 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:22.203110    6992 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 14:34:22.207401    1604 logs.go:123] Gathering logs for kube-apiserver [578d9ee8b5ed] ...
	I1109 14:34:22.207401    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d9ee8b5ed"
	I1109 14:34:22.249381    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:34:22.249381    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:34:22.310157    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:34:22.310157    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:34:22.356599    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:34:22.356599    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:34:22.395604    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:34:22.395604    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:34:22.478729    1604 logs.go:123] Gathering logs for etcd [c23c494c3744] ...
	I1109 14:34:22.478729    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23c494c3744"
	I1109 14:34:22.520866    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:34:22.520866    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:34:22.572775    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:34:22.572775    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:34:22.605444    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:34:22.605444    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:34:25.201868    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:34:25.204927    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:34:25.212169    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:34:25.248999    1604 logs.go:282] 1 containers: [578d9ee8b5ed]
	I1109 14:34:25.255715    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:34:25.287375    1604 logs.go:282] 1 containers: [ee441d6c799c]
	I1109 14:34:25.294571    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:34:25.323437    1604 logs.go:282] 0 containers: []
	W1109 14:34:25.323437    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:34:25.330082    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:34:25.362101    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:34:25.368203    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:34:25.397324    1604 logs.go:282] 0 containers: []
	W1109 14:34:25.397324    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:34:25.403325    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:34:25.434037    1604 logs.go:282] 2 containers: [f91f678ad8d3 25752fe0c294]
	I1109 14:34:25.440099    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:34:25.469946    1604 logs.go:282] 0 containers: []
	W1109 14:34:25.469946    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:34:25.476749    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:34:25.508022    1604 logs.go:282] 0 containers: []
	W1109 14:34:25.508022    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:34:25.508022    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:34:25.508022    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:34:25.592489    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:34:25.592489    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:34:25.685843    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:34:25.685843    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:34:25.710741    1604 logs.go:123] Gathering logs for etcd [ee441d6c799c] ...
	I1109 14:34:25.710741    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee441d6c799c"
	I1109 14:34:25.747816    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:34:25.747816    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:34:25.810458    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:34:25.810458    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:34:25.853224    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:34:25.853224    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:34:25.889842    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:34:25.889842    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:34:25.921248    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:34:25.921248    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:34:26.008625    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:34:25.998620    7322 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:25.999509    7322 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:26.001602    7322 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:26.002490    7322 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:26.005120    7322 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1109 14:34:25.998620    7322 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:25.999509    7322 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:26.001602    7322 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:26.002490    7322 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:26.005120    7322 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 14:34:26.008625    1604 logs.go:123] Gathering logs for kube-apiserver [578d9ee8b5ed] ...
	I1109 14:34:26.008625    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d9ee8b5ed"
	I1109 14:34:26.053911    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:34:26.053911    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:34:28.590186    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:34:28.592743    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:34:28.598845    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:34:28.630993    1604 logs.go:282] 1 containers: [578d9ee8b5ed]
	I1109 14:34:28.637100    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:34:28.667725    1604 logs.go:282] 1 containers: [ee441d6c799c]
	I1109 14:34:28.673498    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:34:28.706753    1604 logs.go:282] 0 containers: []
	W1109 14:34:28.706753    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:34:28.712933    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:34:28.743281    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:34:28.748961    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:34:28.784914    1604 logs.go:282] 0 containers: []
	W1109 14:34:28.784914    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:34:28.791530    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:34:28.821194    1604 logs.go:282] 2 containers: [f91f678ad8d3 25752fe0c294]
	I1109 14:34:28.827428    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:34:28.858766    1604 logs.go:282] 0 containers: []
	W1109 14:34:28.858766    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:34:28.867469    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:34:28.897804    1604 logs.go:282] 0 containers: []
	W1109 14:34:28.897804    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:34:28.897804    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:34:28.897804    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:34:28.927766    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:34:28.927766    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:34:29.007120    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:34:29.007640    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:34:29.098114    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:34:29.089754    7453 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:29.091204    7453 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:29.092161    7453 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:29.093347    7453 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:29.095190    7453 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1109 14:34:29.089754    7453 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:29.091204    7453 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:29.092161    7453 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:29.093347    7453 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:29.095190    7453 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 14:34:29.098114    1604 logs.go:123] Gathering logs for etcd [ee441d6c799c] ...
	I1109 14:34:29.098114    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee441d6c799c"
	I1109 14:34:29.132704    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:34:29.132704    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:34:29.171199    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:34:29.171199    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:34:29.265562    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:34:29.266560    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:34:29.290825    1604 logs.go:123] Gathering logs for kube-apiserver [578d9ee8b5ed] ...
	I1109 14:34:29.290825    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d9ee8b5ed"
	I1109 14:34:29.331576    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:34:29.331576    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:34:29.394807    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:34:29.395803    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:34:29.444461    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:34:29.444461    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:34:31.981617    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:34:31.984436    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:34:31.990760    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:34:32.026471    1604 logs.go:282] 1 containers: [578d9ee8b5ed]
	I1109 14:34:32.038695    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:34:32.072141    1604 logs.go:282] 1 containers: [ee441d6c799c]
	I1109 14:34:32.080790    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:34:32.111004    1604 logs.go:282] 0 containers: []
	W1109 14:34:32.111004    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:34:32.117196    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:34:32.150370    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:34:32.156942    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:34:32.188544    1604 logs.go:282] 0 containers: []
	W1109 14:34:32.188544    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:34:32.195022    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:34:32.228524    1604 logs.go:282] 2 containers: [f91f678ad8d3 25752fe0c294]
	I1109 14:34:32.238683    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:34:32.270513    1604 logs.go:282] 0 containers: []
	W1109 14:34:32.270513    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:34:32.276651    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:34:32.307178    1604 logs.go:282] 0 containers: []
	W1109 14:34:32.307178    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:34:32.307178    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:34:32.307178    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:34:32.399640    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:34:32.399640    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:34:32.424837    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:34:32.424892    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:34:32.511238    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:34:32.502759    7614 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:32.503620    7614 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:32.504740    7614 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:32.505526    7614 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:32.507912    7614 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1109 14:34:32.502759    7614 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:32.503620    7614 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:32.504740    7614 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:32.505526    7614 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:32.507912    7614 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 14:34:32.511238    1604 logs.go:123] Gathering logs for kube-apiserver [578d9ee8b5ed] ...
	I1109 14:34:32.511238    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d9ee8b5ed"
	I1109 14:34:32.549551    1604 logs.go:123] Gathering logs for etcd [ee441d6c799c] ...
	I1109 14:34:32.549601    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee441d6c799c"
	I1109 14:34:32.583850    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:34:32.583850    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:34:32.616826    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:34:32.616826    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:34:32.696116    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:34:32.696116    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:34:32.762800    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:34:32.762800    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:34:32.809522    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:34:32.809522    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:34:32.846881    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:34:32.846942    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:34:35.376439    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:34:35.379303    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:34:35.384928    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:34:35.416008    1604 logs.go:282] 1 containers: [578d9ee8b5ed]
	I1109 14:34:35.422164    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:34:35.452764    1604 logs.go:282] 1 containers: [ee441d6c799c]
	I1109 14:34:35.458711    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:34:35.488167    1604 logs.go:282] 0 containers: []
	W1109 14:34:35.488204    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:34:35.494313    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:34:35.529173    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:34:35.535221    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:34:35.564725    1604 logs.go:282] 0 containers: []
	W1109 14:34:35.564813    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:34:35.570445    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:34:35.602808    1604 logs.go:282] 2 containers: [f91f678ad8d3 25752fe0c294]
	I1109 14:34:35.608794    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:34:35.639427    1604 logs.go:282] 0 containers: []
	W1109 14:34:35.639458    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:34:35.645364    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:34:35.676424    1604 logs.go:282] 0 containers: []
	W1109 14:34:35.676520    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:34:35.676552    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:34:35.676552    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:34:35.710407    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:34:35.710407    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:34:35.806084    1604 logs.go:123] Gathering logs for kube-apiserver [578d9ee8b5ed] ...
	I1109 14:34:35.806084    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d9ee8b5ed"
	I1109 14:34:35.848128    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:34:35.848165    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:34:36.106794    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:34:36.106794    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:34:36.143663    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:34:36.143663    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:34:36.227755    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:34:36.227755    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:34:36.249452    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:34:36.249984    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:34:36.335584    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:34:36.323372    7850 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:36.324299    7850 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:36.327373    7850 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:36.328709    7850 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:36.329829    7850 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1109 14:34:36.323372    7850 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:36.324299    7850 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:36.327373    7850 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:36.328709    7850 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:36.329829    7850 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 14:34:36.335584    1604 logs.go:123] Gathering logs for etcd [ee441d6c799c] ...
	I1109 14:34:36.335584    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee441d6c799c"
	I1109 14:34:36.372527    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:34:36.372606    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:34:36.418191    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:34:36.418191    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:34:38.954614    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:34:38.957315    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:34:38.963331    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:34:38.993017    1604 logs.go:282] 1 containers: [578d9ee8b5ed]
	I1109 14:34:38.998817    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:34:39.029813    1604 logs.go:282] 1 containers: [ee441d6c799c]
	I1109 14:34:39.035916    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:34:39.065932    1604 logs.go:282] 0 containers: []
	W1109 14:34:39.065932    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:34:39.073731    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:34:39.108532    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:34:39.115272    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:34:39.148075    1604 logs.go:282] 0 containers: []
	W1109 14:34:39.148075    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:34:39.155086    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:34:39.185131    1604 logs.go:282] 2 containers: [f91f678ad8d3 25752fe0c294]
	I1109 14:34:39.191269    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:34:39.226671    1604 logs.go:282] 0 containers: []
	W1109 14:34:39.226671    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:34:39.234589    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:34:39.267719    1604 logs.go:282] 0 containers: []
	W1109 14:34:39.267719    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:34:39.267719    1604 logs.go:123] Gathering logs for etcd [ee441d6c799c] ...
	I1109 14:34:39.267719    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee441d6c799c"
	I1109 14:34:39.304467    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:34:39.304569    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:34:39.353955    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:34:39.353955    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:34:39.383323    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:34:39.383323    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:34:39.463049    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:34:39.463049    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:34:39.486133    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:34:39.486186    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:34:39.571418    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:34:39.560366    8018 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:39.561486    8018 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:39.564723    8018 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:39.565274    8018 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:39.567592    8018 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1109 14:34:39.560366    8018 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:39.561486    8018 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:39.564723    8018 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:39.565274    8018 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:39.567592    8018 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 14:34:39.571418    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:34:39.571418    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:34:39.636337    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:34:39.636337    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:34:39.822130    1604 logs.go:123] Gathering logs for kube-controller-manager [25752fe0c294] ...
	I1109 14:34:39.822213    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25752fe0c294"
	I1109 14:34:39.873450    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:34:39.873450    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:34:40.006864    1604 logs.go:123] Gathering logs for kube-apiserver [578d9ee8b5ed] ...
	I1109 14:34:40.006864    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d9ee8b5ed"
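
From here to the end of the capture the same cycle simply repeats roughly every three seconds: probe /healthz, get EOF, enumerate containers, gather logs, retry. Note also that by 14:34:42 the controller-manager enumeration drops from two IDs to one, another sign of container garbage collection during the wait. Condensed into one sketch; the structure follows the log, but the timeout and helpers are assumptions (probe and gather stand in for the sketches earlier in this section).

package main

import (
	"fmt"
	"time"
)

// probe is a stub for the healthz check; in this run the apiserver never
// recovers, so every probe fails.
func probe() error { return fmt.Errorf("EOF") }

// gather stands in for the per-component log collection shown above.
func gather() {
	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		fmt.Println("Gathering logs for", c, "...")
	}
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // illustrative budget only
	for time.Now().Before(deadline) {
		if err := probe(); err == nil {
			fmt.Println("apiserver is healthy")
			return
		}
		gather()
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for a healthy apiserver")
}
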
	I1109 14:34:42.555174    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:34:42.558873    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:34:42.564683    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:34:42.598191    1604 logs.go:282] 1 containers: [578d9ee8b5ed]
	I1109 14:34:42.604518    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:34:42.635816    1604 logs.go:282] 1 containers: [ee441d6c799c]
	I1109 14:34:42.643810    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:34:42.676862    1604 logs.go:282] 0 containers: []
	W1109 14:34:42.676862    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:34:42.684670    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:34:42.716466    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:34:42.722458    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:34:42.754629    1604 logs.go:282] 0 containers: []
	W1109 14:34:42.754629    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:34:42.763140    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:34:42.797156    1604 logs.go:282] 1 containers: [f91f678ad8d3]
	I1109 14:34:42.807203    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:34:42.838709    1604 logs.go:282] 0 containers: []
	W1109 14:34:42.838709    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:34:42.844706    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:34:42.879654    1604 logs.go:282] 0 containers: []
	W1109 14:34:42.879654    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:34:42.879654    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:34:42.879654    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:34:42.903161    1604 logs.go:123] Gathering logs for kube-apiserver [578d9ee8b5ed] ...
	I1109 14:34:42.903232    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d9ee8b5ed"
	I1109 14:34:42.942671    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:34:42.942782    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:34:43.016597    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:34:43.016597    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:34:43.065476    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:34:43.065476    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:34:43.097435    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:34:43.097435    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:34:43.132122    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:34:43.132122    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:34:43.262108    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:34:43.263108    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:34:43.361799    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:34:43.349828    8204 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:43.350817    8204 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:43.353111    8204 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:43.354156    8204 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:43.356773    8204 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:34:43.361799    1604 logs.go:123] Gathering logs for etcd [ee441d6c799c] ...
	I1109 14:34:43.361799    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee441d6c799c"
	I1109 14:34:43.398019    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:34:43.398019    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:34:45.994726    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:34:45.997162    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:34:46.003865    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:34:46.045151    1604 logs.go:282] 1 containers: [578d9ee8b5ed]
	I1109 14:34:46.053998    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:34:46.097357    1604 logs.go:282] 1 containers: [ee441d6c799c]
	I1109 14:34:46.105322    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:34:46.136167    1604 logs.go:282] 0 containers: []
	W1109 14:34:46.136167    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:34:46.141643    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:34:46.173490    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:34:46.181549    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:34:46.210124    1604 logs.go:282] 0 containers: []
	W1109 14:34:46.210206    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:34:46.216530    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:34:46.251349    1604 logs.go:282] 1 containers: [f91f678ad8d3]
	I1109 14:34:46.259251    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:34:46.293251    1604 logs.go:282] 0 containers: []
	W1109 14:34:46.293302    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:34:46.300185    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:34:46.338130    1604 logs.go:282] 0 containers: []
	W1109 14:34:46.338130    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:34:46.338191    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:34:46.338191    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:34:46.440432    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:34:46.429783    8337 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:46.430768    8337 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:46.431969    8337 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:46.433941    8337 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:46.436022    8337 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:34:46.440432    1604 logs.go:123] Gathering logs for kube-apiserver [578d9ee8b5ed] ...
	I1109 14:34:46.440432    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d9ee8b5ed"
	I1109 14:34:46.480108    1604 logs.go:123] Gathering logs for etcd [ee441d6c799c] ...
	I1109 14:34:46.480184    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee441d6c799c"
	I1109 14:34:46.519515    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:34:46.519608    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:34:46.587971    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:34:46.587971    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:34:46.689642    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:34:46.689642    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:34:46.735530    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:34:46.735530    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:34:46.773957    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:34:46.774026    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:34:46.803267    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:34:46.803267    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:34:46.884264    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:34:46.884293    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:34:49.410318    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:34:49.413818    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:34:49.420237    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:34:49.452216    1604 logs.go:282] 1 containers: [578d9ee8b5ed]
	I1109 14:34:49.458731    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:34:49.491788    1604 logs.go:282] 1 containers: [ee441d6c799c]
	I1109 14:34:49.498238    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:34:49.529271    1604 logs.go:282] 0 containers: []
	W1109 14:34:49.529271    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:34:49.535231    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:34:49.573458    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:34:49.579819    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:34:49.616669    1604 logs.go:282] 0 containers: []
	W1109 14:34:49.616717    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:34:49.623069    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:34:49.654333    1604 logs.go:282] 1 containers: [f91f678ad8d3]
	I1109 14:34:49.660380    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:34:49.690183    1604 logs.go:282] 0 containers: []
	W1109 14:34:49.690183    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:34:49.699139    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:34:49.731916    1604 logs.go:282] 0 containers: []
	W1109 14:34:49.731916    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:34:49.731916    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:34:49.731916    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:34:49.835056    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:34:49.823751    8497 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:49.824833    8497 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:49.825691    8497 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:49.828532    8497 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:49.829304    8497 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:34:49.835056    1604 logs.go:123] Gathering logs for kube-apiserver [578d9ee8b5ed] ...
	I1109 14:34:49.835056    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d9ee8b5ed"
	I1109 14:34:49.880766    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:34:49.880823    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:34:49.944755    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:34:49.944755    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:34:49.992117    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:34:49.992117    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:34:50.034710    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:34:50.034766    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:34:50.060166    1604 logs.go:123] Gathering logs for etcd [ee441d6c799c] ...
	I1109 14:34:50.060202    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee441d6c799c"
	I1109 14:34:50.097371    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:34:50.097476    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:34:50.126843    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:34:50.126843    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:34:50.210229    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:34:50.210275    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:34:52.805910    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:34:52.808773    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:34:52.815160    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:34:52.847911    1604 logs.go:282] 1 containers: [578d9ee8b5ed]
	I1109 14:34:52.853781    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:34:52.887978    1604 logs.go:282] 1 containers: [ee441d6c799c]
	I1109 14:34:52.894184    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:34:52.923965    1604 logs.go:282] 0 containers: []
	W1109 14:34:52.923965    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:34:52.931883    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:34:52.964950    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:34:52.971321    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:34:53.000095    1604 logs.go:282] 0 containers: []
	W1109 14:34:53.000143    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:34:53.006773    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:34:53.036566    1604 logs.go:282] 1 containers: [f91f678ad8d3]
	I1109 14:34:53.043016    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:34:53.072956    1604 logs.go:282] 0 containers: []
	W1109 14:34:53.072956    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:34:53.077960    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:34:53.107059    1604 logs.go:282] 0 containers: []
	W1109 14:34:53.107097    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:34:53.107124    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:34:53.107124    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:34:53.184627    1604 logs.go:123] Gathering logs for kube-apiserver [578d9ee8b5ed] ...
	I1109 14:34:53.184627    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d9ee8b5ed"
	I1109 14:34:53.221626    1604 logs.go:123] Gathering logs for etcd [ee441d6c799c] ...
	I1109 14:34:53.221626    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee441d6c799c"
	I1109 14:34:53.253850    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:34:53.253921    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:34:53.321871    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:34:53.321871    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:34:53.354543    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:34:53.354543    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:34:53.455318    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:34:53.455318    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:34:53.479776    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:34:53.479811    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:34:53.570103    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:34:53.559764    8726 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:53.561058    8726 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:53.562518    8726 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:53.563953    8726 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:53.565273    8726 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:34:53.570103    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:34:53.570103    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:34:53.614580    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:34:53.614580    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:34:56.147485    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:34:56.150497    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:34:56.157079    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:34:56.185827    1604 logs.go:282] 1 containers: [578d9ee8b5ed]
	I1109 14:34:56.191823    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:34:56.221316    1604 logs.go:282] 1 containers: [ee441d6c799c]
	I1109 14:34:56.227046    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:34:56.258695    1604 logs.go:282] 0 containers: []
	W1109 14:34:56.258695    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:34:56.263684    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:34:56.294062    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:34:56.300163    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:34:56.330085    1604 logs.go:282] 0 containers: []
	W1109 14:34:56.330085    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:34:56.336560    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:34:56.369608    1604 logs.go:282] 1 containers: [f91f678ad8d3]
	I1109 14:34:56.376619    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:34:56.405604    1604 logs.go:282] 0 containers: []
	W1109 14:34:56.405604    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:34:56.410611    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:34:56.443602    1604 logs.go:282] 0 containers: []
	W1109 14:34:56.443602    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:34:56.443602    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:34:56.443602    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:34:56.545266    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:34:56.545266    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:34:56.569282    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:34:56.570275    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:34:56.658932    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:34:56.651160    8840 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:56.652300    8840 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:56.653504    8840 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:56.654406    8840 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:34:56.656548    8840 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:34:56.658932    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:34:56.658932    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:34:56.735928    1604 logs.go:123] Gathering logs for kube-apiserver [578d9ee8b5ed] ...
	I1109 14:34:56.735928    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d9ee8b5ed"
	I1109 14:34:56.773434    1604 logs.go:123] Gathering logs for etcd [ee441d6c799c] ...
	I1109 14:34:56.773434    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee441d6c799c"
	I1109 14:34:56.810189    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:34:56.810302    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:34:56.876201    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:34:56.876201    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:34:56.920130    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:34:56.920130    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:34:56.955439    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:34:56.955479    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:34:59.487171    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:34:59.490408    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:34:59.496738    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:34:59.527822    1604 logs.go:282] 1 containers: [578d9ee8b5ed]
	I1109 14:34:59.534751    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:34:59.566740    1604 logs.go:282] 1 containers: [ee441d6c799c]
	I1109 14:34:59.572744    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:34:59.605511    1604 logs.go:282] 0 containers: []
	W1109 14:34:59.605548    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:34:59.611640    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:34:59.643373    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:34:59.649491    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:34:59.679040    1604 logs.go:282] 0 containers: []
	W1109 14:34:59.679040    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:34:59.684968    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:34:59.716009    1604 logs.go:282] 1 containers: [f91f678ad8d3]
	I1109 14:34:59.722958    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:34:59.757122    1604 logs.go:282] 0 containers: []
	W1109 14:34:59.757122    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:34:59.764010    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:34:59.797362    1604 logs.go:282] 0 containers: []
	W1109 14:34:59.797362    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:34:59.797362    1604 logs.go:123] Gathering logs for kube-apiserver [578d9ee8b5ed] ...
	I1109 14:34:59.797362    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d9ee8b5ed"
	I1109 14:34:59.840864    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:34:59.840864    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:34:59.909582    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:34:59.909582    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:34:59.957724    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:34:59.957724    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:34:59.996232    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:34:59.996284    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:35:00.095242    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:35:00.095242    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:35:00.185283    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:35:00.174699    9031 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:00.176329    9031 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:00.178476    9031 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:00.180313    9031 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:00.182464    9031 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:35:00.185283    1604 logs.go:123] Gathering logs for etcd [ee441d6c799c] ...
	I1109 14:35:00.185283    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee441d6c799c"
	I1109 14:35:00.219296    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:35:00.219368    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:35:00.252618    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:35:00.252618    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:35:00.342626    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:35:00.342626    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:35:02.866695    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:35:02.870111    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:35:02.876414    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:35:02.909592    1604 logs.go:282] 1 containers: [578d9ee8b5ed]
	I1109 14:35:02.915808    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:35:02.946560    1604 logs.go:282] 1 containers: [ee441d6c799c]
	I1109 14:35:02.953352    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:35:02.988812    1604 logs.go:282] 0 containers: []
	W1109 14:35:02.988812    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:35:02.997811    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:35:03.027798    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:35:03.033791    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:35:03.068408    1604 logs.go:282] 0 containers: []
	W1109 14:35:03.069421    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:35:03.074409    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:35:03.121424    1604 logs.go:282] 2 containers: [bbba73f9daf6 f91f678ad8d3]
	I1109 14:35:03.128418    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:35:03.160146    1604 logs.go:282] 0 containers: []
	W1109 14:35:03.160146    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:35:03.167638    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:35:03.219839    1604 logs.go:282] 0 containers: []
	W1109 14:35:03.219839    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:35:03.219839    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:35:03.219839    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:35:03.325398    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:35:03.326391    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:35:03.354368    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:35:03.354392    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:35:03.408860    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:35:03.409023    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:35:03.453457    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:35:03.453457    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:35:03.566010    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:35:03.566010    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:35:03.685528    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:35:03.677293    9285 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:03.679174    9285 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:03.680343    9285 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:03.681274    9285 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:03.682335    9285 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:35:03.685624    1604 logs.go:123] Gathering logs for kube-apiserver [578d9ee8b5ed] ...
	I1109 14:35:03.685624    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d9ee8b5ed"
	I1109 14:35:03.739796    1604 logs.go:123] Gathering logs for etcd [ee441d6c799c] ...
	I1109 14:35:03.739796    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee441d6c799c"
	I1109 14:35:03.775681    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:35:03.775681    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:35:03.863689    1604 logs.go:123] Gathering logs for kube-controller-manager [bbba73f9daf6] ...
	I1109 14:35:03.863689    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbba73f9daf6"
	I1109 14:35:03.897795    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:35:03.897880    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:35:06.430464    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:35:06.433439    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:35:06.440286    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:35:06.476602    1604 logs.go:282] 1 containers: [578d9ee8b5ed]
	I1109 14:35:06.483007    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:35:06.514380    1604 logs.go:282] 1 containers: [ee441d6c799c]
	I1109 14:35:06.523506    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:35:06.556829    1604 logs.go:282] 0 containers: []
	W1109 14:35:06.556829    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:35:06.562793    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:35:06.594633    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:35:06.600411    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:35:06.632341    1604 logs.go:282] 0 containers: []
	W1109 14:35:06.632341    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:35:06.637979    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:35:06.667466    1604 logs.go:282] 2 containers: [bbba73f9daf6 f91f678ad8d3]
	I1109 14:35:06.673663    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:35:06.701987    1604 logs.go:282] 0 containers: []
	W1109 14:35:06.701987    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:35:06.710979    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:35:06.740020    1604 logs.go:282] 0 containers: []
	W1109 14:35:06.740020    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:35:06.740020    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:35:06.740020    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:35:06.764209    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:35:06.764209    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:35:06.848848    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:35:06.837753    9419 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:06.838952    9419 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:06.840616    9419 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:06.841848    9419 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:06.843016    9419 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:35:06.848848    1604 logs.go:123] Gathering logs for kube-apiserver [578d9ee8b5ed] ...
	I1109 14:35:06.848848    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d9ee8b5ed"
	I1109 14:35:06.886847    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:35:06.887381    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:35:06.938937    1604 logs.go:123] Gathering logs for kube-controller-manager [bbba73f9daf6] ...
	I1109 14:35:06.938937    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbba73f9daf6"
	I1109 14:35:06.976613    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:35:06.976790    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:35:07.009845    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:35:07.009845    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:35:07.096956    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:35:07.097016    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:35:07.196896    1604 logs.go:123] Gathering logs for etcd [ee441d6c799c] ...
	I1109 14:35:07.196896    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee441d6c799c"
	I1109 14:35:07.242920    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:35:07.242984    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:35:07.318165    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:35:07.318165    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:35:09.855283    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:35:09.858428    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:35:09.864628    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:35:09.893983    1604 logs.go:282] 1 containers: [578d9ee8b5ed]
	I1109 14:35:09.900408    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:35:09.933299    1604 logs.go:282] 1 containers: [ee441d6c799c]
	I1109 14:35:09.939572    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:35:09.969368    1604 logs.go:282] 0 containers: []
	W1109 14:35:09.969368    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:35:09.976721    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:35:10.010139    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:35:10.016573    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:35:10.045524    1604 logs.go:282] 0 containers: []
	W1109 14:35:10.045524    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:35:10.050885    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:35:10.082211    1604 logs.go:282] 2 containers: [bbba73f9daf6 f91f678ad8d3]
	I1109 14:35:10.088636    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:35:10.119082    1604 logs.go:282] 0 containers: []
	W1109 14:35:10.119131    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:35:10.125614    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:35:10.155468    1604 logs.go:282] 0 containers: []
	W1109 14:35:10.155468    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:35:10.155468    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:35:10.155468    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:35:10.249670    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:35:10.249670    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:35:10.284174    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:35:10.284174    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:35:10.387780    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:35:10.387780    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:35:10.415289    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:35:10.415343    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:35:10.469807    1604 logs.go:123] Gathering logs for kube-controller-manager [bbba73f9daf6] ...
	I1109 14:35:10.469807    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbba73f9daf6"
	I1109 14:35:10.505556    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:35:10.506078    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:35:10.546815    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:35:10.546815    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:35:10.632808    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:35:10.632808    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:35:10.725302    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:35:10.710614    9673 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:10.711487    9673 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:10.714080    9673 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:10.715108    9673 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:10.715895    9673 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
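
On the recurring kubectl failures above: /var/lib/minikube/kubeconfig points the client at localhost:8443, the resolver tries [::1]:8443, and the refused dial means nothing inside the node is listening on that port, i.e. the kube-apiserver container is not serving. A hypothetical stand-alone check of the same dial (the address is taken from the log; the program itself is illustrative, not part of the test):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The same dial kubectl attempts; "connection refused" means no
        // process is bound to port 8443 inside the node.
        conn, err := net.DialTimeout("tcp", "[::1]:8443", time.Second)
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on [::1]:8443")
    }
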
	I1109 14:35:10.725302    1604 logs.go:123] Gathering logs for kube-apiserver [578d9ee8b5ed] ...
	I1109 14:35:10.725302    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d9ee8b5ed"
	I1109 14:35:10.769361    1604 logs.go:123] Gathering logs for etcd [ee441d6c799c] ...
	I1109 14:35:10.769361    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee441d6c799c"
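	Every "Checking apiserver healthz" probe in this stretch ends in EOF: nothing is answering on the Docker-forwarded apiserver port yet. A minimal sketch of an equivalent manual probe, assuming this run's forwarded port 51763 (the port differs per profile):

	    # Probe the apiserver health endpoint, equivalent to the checks above.
	    # -k skips TLS verification; 51763 is this run's forwarded port.
	    curl -k --max-time 5 https://127.0.0.1:51763/healthz
	    # An immediate EOF or connection error means the apiserver container
	    # is not serving yet (or exited right after starting).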
	I1109 14:35:13.306976    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:35:13.310447    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:35:13.316883    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:35:13.352683    1604 logs.go:282] 1 containers: [578d9ee8b5ed]
	I1109 14:35:13.360312    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:35:13.402506    1604 logs.go:282] 1 containers: [ee441d6c799c]
	I1109 14:35:13.408507    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:35:13.443993    1604 logs.go:282] 0 containers: []
	W1109 14:35:13.444996    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:35:13.451000    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:35:13.481996    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:35:13.487989    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:35:13.519990    1604 logs.go:282] 0 containers: []
	W1109 14:35:13.519990    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:35:13.528997    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:35:13.564999    1604 logs.go:282] 2 containers: [bbba73f9daf6 f91f678ad8d3]
	I1109 14:35:13.572000    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:35:13.601998    1604 logs.go:282] 0 containers: []
	W1109 14:35:13.601998    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:35:13.607992    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:35:13.638021    1604 logs.go:282] 0 containers: []
	W1109 14:35:13.638021    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:35:13.638021    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:35:13.638021    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:35:13.684993    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:35:13.684993    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:35:13.717994    1604 logs.go:123] Gathering logs for kube-controller-manager [bbba73f9daf6] ...
	I1109 14:35:13.717994    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbba73f9daf6"
	I1109 14:35:13.752993    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:35:13.752993    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:35:13.782995    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:35:13.782995    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:35:13.869053    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:35:13.869103    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:35:13.972150    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:35:13.972150    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:35:14.001667    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:35:14.001667    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:35:14.101390    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:35:14.089730    9832 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:14.090715    9832 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:14.092570    9832 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:14.094015    9832 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:14.095065    9832 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:35:14.101390    1604 logs.go:123] Gathering logs for kube-apiserver [578d9ee8b5ed] ...
	I1109 14:35:14.101390    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d9ee8b5ed"
	I1109 14:35:14.146229    1604 logs.go:123] Gathering logs for etcd [ee441d6c799c] ...
	I1109 14:35:14.146229    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee441d6c799c"
	I1109 14:35:14.181220    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:35:14.181220    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
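	Between health probes, minikube enumerates control-plane containers one component at a time with a docker name filter. A sketch of the same enumeration, assuming docker access on the node (component names taken from the log):

	    # List container IDs per component, mirroring the
	    # "docker ps -a --filter=name=k8s_<name> --format={{.ID}}" calls above.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet storage-provisioner; do
	      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	      echo "${c}: ${ids:-none}"
	    done

	In this run only the apiserver, etcd, scheduler, and controller-manager report containers; coredns, kube-proxy, kindnet, and storage-provisioner never appear, consistent with a control plane that never finished starting.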
	I1109 14:35:16.753864    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:35:16.756950    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:35:16.762993    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:35:16.795632    1604 logs.go:282] 1 containers: [578d9ee8b5ed]
	I1109 14:35:16.801945    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:35:16.831132    1604 logs.go:282] 1 containers: [ee441d6c799c]
	I1109 14:35:16.837122    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:35:16.866865    1604 logs.go:282] 0 containers: []
	W1109 14:35:16.866865    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:35:16.873068    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:35:16.905512    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:35:16.912327    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:35:16.940254    1604 logs.go:282] 0 containers: []
	W1109 14:35:16.940254    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:35:16.947427    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:35:16.976236    1604 logs.go:282] 2 containers: [bbba73f9daf6 f91f678ad8d3]
	I1109 14:35:16.982430    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:35:17.011133    1604 logs.go:282] 0 containers: []
	W1109 14:35:17.011133    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:35:17.018691    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:35:17.047721    1604 logs.go:282] 0 containers: []
	W1109 14:35:17.047721    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:35:17.047721    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:35:17.047721    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:35:17.068715    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:35:17.068715    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:35:17.152721    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:35:17.142282    9947 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:17.143252    9947 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:17.146014    9947 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:17.147489    9947 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:17.148491    9947 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:35:17.152721    1604 logs.go:123] Gathering logs for kube-apiserver [578d9ee8b5ed] ...
	I1109 14:35:17.152721    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d9ee8b5ed"
	I1109 14:35:17.190716    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:35:17.190716    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:35:17.264056    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:35:17.264056    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:35:17.499877    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:35:17.499877    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:35:17.604110    1604 logs.go:123] Gathering logs for etcd [ee441d6c799c] ...
	I1109 14:35:17.604110    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee441d6c799c"
	I1109 14:35:17.649130    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:35:17.649130    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:35:17.690132    1604 logs.go:123] Gathering logs for kube-controller-manager [bbba73f9daf6] ...
	I1109 14:35:17.690132    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbba73f9daf6"
	I1109 14:35:17.723136    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:35:17.723136    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:35:17.757295    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:35:17.757365    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
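	The kubelet and Docker histories are collected through journalctl with a fixed 400-line tail. The same two pulls, runnable directly on the node (assuming a systemd-based image, as here):

	    # Tail the units minikube inspects; --no-pager keeps output scriptable.
	    sudo journalctl -u kubelet -n 400 --no-pager
	    sudo journalctl -u docker -u cri-docker -n 400 --no-pager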
	I1109 14:35:20.294336    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:35:20.297587    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:35:20.306205    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:35:20.340733    1604 logs.go:282] 1 containers: [578d9ee8b5ed]
	I1109 14:35:20.346830    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:35:20.380334    1604 logs.go:282] 1 containers: [ee441d6c799c]
	I1109 14:35:20.386624    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:35:20.424238    1604 logs.go:282] 0 containers: []
	W1109 14:35:20.424238    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:35:20.431178    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:35:20.466903    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:35:20.472658    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:35:20.503146    1604 logs.go:282] 0 containers: []
	W1109 14:35:20.503146    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:35:20.509540    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:35:20.539755    1604 logs.go:282] 2 containers: [bbba73f9daf6 f91f678ad8d3]
	I1109 14:35:20.546043    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:35:20.574839    1604 logs.go:282] 0 containers: []
	W1109 14:35:20.574866    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:35:20.580952    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:35:20.611623    1604 logs.go:282] 0 containers: []
	W1109 14:35:20.611623    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:35:20.611623    1604 logs.go:123] Gathering logs for kube-apiserver [578d9ee8b5ed] ...
	I1109 14:35:20.611623    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d9ee8b5ed"
	I1109 14:35:20.653305    1604 logs.go:123] Gathering logs for etcd [ee441d6c799c] ...
	I1109 14:35:20.653305    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee441d6c799c"
	I1109 14:35:20.688035    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:35:20.688035    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:35:20.762155    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:35:20.762155    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:35:20.823804    1604 logs.go:123] Gathering logs for kube-controller-manager [bbba73f9daf6] ...
	I1109 14:35:20.823804    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbba73f9daf6"
	I1109 14:35:20.860640    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:35:20.860640    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:35:20.946403    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:35:20.946478    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:35:20.968460    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:35:20.968460    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:35:21.010622    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:35:21.010622    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:35:21.039420    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:35:21.039420    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:35:21.142054    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:35:21.142054    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:35:21.232357    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:35:21.221307   10220 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:21.222386   10220 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:21.224494   10220 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:21.225695   10220 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:21.226539   10220 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:35:23.732811    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:35:23.735803    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:35:23.747192    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:35:23.780750    1604 logs.go:282] 1 containers: [578d9ee8b5ed]
	I1109 14:35:23.787523    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:35:23.818714    1604 logs.go:282] 1 containers: [ee441d6c799c]
	I1109 14:35:23.826466    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:35:23.860716    1604 logs.go:282] 0 containers: []
	W1109 14:35:23.860716    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:35:23.867133    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:35:23.897311    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:35:23.903572    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:35:23.932446    1604 logs.go:282] 0 containers: []
	W1109 14:35:23.932446    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:35:23.939112    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:35:23.969897    1604 logs.go:282] 2 containers: [bbba73f9daf6 f91f678ad8d3]
	I1109 14:35:23.976203    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:35:24.005776    1604 logs.go:282] 0 containers: []
	W1109 14:35:24.005776    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:35:24.010770    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:35:24.043823    1604 logs.go:282] 0 containers: []
	W1109 14:35:24.043823    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:35:24.043823    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:35:24.043823    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:35:24.128440    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:35:24.128980    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:35:24.228960    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:35:24.229022    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:35:24.251145    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:35:24.251145    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:35:24.340492    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:35:24.328489   10331 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:24.329171   10331 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:24.331939   10331 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:24.334095   10331 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:24.335022   10331 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:35:24.340492    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:35:24.340492    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:35:24.417518    1604 logs.go:123] Gathering logs for kube-controller-manager [bbba73f9daf6] ...
	I1109 14:35:24.417518    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbba73f9daf6"
	I1109 14:35:24.451177    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:35:24.451177    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:35:24.485971    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:35:24.485971    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:35:24.517348    1604 logs.go:123] Gathering logs for kube-apiserver [578d9ee8b5ed] ...
	I1109 14:35:24.517348    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d9ee8b5ed"
	I1109 14:35:24.556253    1604 logs.go:123] Gathering logs for etcd [ee441d6c799c] ...
	I1109 14:35:24.556328    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee441d6c799c"
	I1109 14:35:24.594355    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:35:24.594410    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
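	The dmesg pass keeps only warning-or-worse kernel messages. The same filter with the combined -PH flag split out (flag meanings per util-linux dmesg):

	    # -H human-readable timestamps, -P no pager (a pager is the default
	    # with -H), -L=never disables color; --level keeps only the listed
	    # severities, and tail caps the volume at 400 lines.
	    sudo dmesg -H -P -L=never --level warn,err,crit,alert,emerg | tail -n 400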
	I1109 14:35:27.138563    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:35:27.140999    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:35:27.146967    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:35:27.176710    1604 logs.go:282] 1 containers: [578d9ee8b5ed]
	I1109 14:35:27.184437    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:35:27.216534    1604 logs.go:282] 1 containers: [ee441d6c799c]
	I1109 14:35:27.224042    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:35:27.254406    1604 logs.go:282] 0 containers: []
	W1109 14:35:27.254406    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:35:27.261459    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:35:27.293455    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:35:27.301973    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:35:27.337795    1604 logs.go:282] 0 containers: []
	W1109 14:35:27.337870    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:35:27.345540    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:35:27.379702    1604 logs.go:282] 2 containers: [bbba73f9daf6 f91f678ad8d3]
	I1109 14:35:27.386005    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:35:27.418295    1604 logs.go:282] 0 containers: []
	W1109 14:35:27.418355    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:35:27.425637    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:35:27.455575    1604 logs.go:282] 0 containers: []
	W1109 14:35:27.455575    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:35:27.455648    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:35:27.455648    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:35:27.547039    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:35:27.547039    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:35:27.571493    1604 logs.go:123] Gathering logs for etcd [ee441d6c799c] ...
	I1109 14:35:27.571566    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee441d6c799c"
	I1109 14:35:27.607653    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:35:27.607653    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:35:27.685528    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:35:27.685528    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:35:27.731327    1604 logs.go:123] Gathering logs for kube-controller-manager [bbba73f9daf6] ...
	I1109 14:35:27.731899    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbba73f9daf6"
	I1109 14:35:27.771264    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:35:27.771264    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:35:27.806641    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:35:27.806641    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:35:27.837167    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:35:27.837212    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:35:27.927930    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:35:27.915909   10520 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:27.916952   10520 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:27.917896   10520 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:27.921692   10520 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:27.922513   10520 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:35:27.927930    1604 logs.go:123] Gathering logs for kube-apiserver [578d9ee8b5ed] ...
	I1109 14:35:27.927930    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d9ee8b5ed"
	I1109 14:35:27.969339    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:35:27.969411    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
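	The container-status step prefers crictl and falls back to the plain Docker CLI when crictl is absent. The same fallback, written with command -v instead of backtick substitution:

	    # Mirrors: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    if command -v crictl >/dev/null 2>&1; then
	      sudo crictl ps -a
	    else
	      sudo docker ps -a
	    fi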
	I1109 14:35:30.553483    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:35:30.556400    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:35:30.562909    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 14:35:30.605235    1604 logs.go:282] 1 containers: [578d9ee8b5ed]
	I1109 14:35:30.611544    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 14:35:30.642324    1604 logs.go:282] 1 containers: [ee441d6c799c]
	I1109 14:35:30.648942    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 14:35:30.680769    1604 logs.go:282] 0 containers: []
	W1109 14:35:30.680816    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:35:30.687429    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 14:35:30.719792    1604 logs.go:282] 2 containers: [c1fa8e945752 dcfa57dcafaf]
	I1109 14:35:30.725769    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 14:35:30.754039    1604 logs.go:282] 0 containers: []
	W1109 14:35:30.754073    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:35:30.760279    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 14:35:30.790873    1604 logs.go:282] 2 containers: [bbba73f9daf6 f91f678ad8d3]
	I1109 14:35:30.799103    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1109 14:35:30.831449    1604 logs.go:282] 0 containers: []
	W1109 14:35:30.831449    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:35:30.838202    1604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 14:35:30.871045    1604 logs.go:282] 0 containers: []
	W1109 14:35:30.871045    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:35:30.871045    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:35:30.871045    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:35:30.962674    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:35:30.951368   10657 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:30.952126   10657 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:30.955887   10657 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:30.957247   10657 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:35:30.958309   10657 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:35:30.962674    1604 logs.go:123] Gathering logs for etcd [ee441d6c799c] ...
	I1109 14:35:30.962725    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee441d6c799c"
	I1109 14:35:30.998026    1604 logs.go:123] Gathering logs for kube-controller-manager [f91f678ad8d3] ...
	I1109 14:35:30.998587    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f91f678ad8d3"
	I1109 14:35:31.034672    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:35:31.034672    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:35:31.064528    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:35:31.064528    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:35:31.159709    1604 logs.go:123] Gathering logs for kube-apiserver [578d9ee8b5ed] ...
	I1109 14:35:31.159709    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578d9ee8b5ed"
	I1109 14:35:31.200556    1604 logs.go:123] Gathering logs for kube-scheduler [c1fa8e945752] ...
	I1109 14:35:31.200556    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1fa8e945752"
	I1109 14:35:31.275841    1604 logs.go:123] Gathering logs for kube-scheduler [dcfa57dcafaf] ...
	I1109 14:35:31.275841    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcfa57dcafaf"
	I1109 14:35:31.321911    1604 logs.go:123] Gathering logs for kube-controller-manager [bbba73f9daf6] ...
	I1109 14:35:31.321911    1604 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbba73f9daf6"
	I1109 14:35:31.356111    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:35:31.356111    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:35:31.459103    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:35:31.459103    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
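	Every "describe nodes" attempt in this stretch fails identically: kubectl dials https://localhost:8443 and is refused, so the node description is never captured. A sketch for confirming which server address the in-node kubeconfig targets, reusing the kubectl binary and kubeconfig paths from the log (the localhost:8443 reading is an interpretation of the errors, not stated in the log):

	    # Print the apiserver URL kubectl will dial; if it is localhost:8443
	    # while the healthz probes above get EOF, every kubectl call is doomed.
	    sudo /var/lib/minikube/binaries/v1.28.3/kubectl config view \
	      --kubeconfig=/var/lib/minikube/kubeconfig \
	      -o jsonpath='{.clusters[0].cluster.server}'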
	I1109 14:35:33.980160    1604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51763/healthz ...
	I1109 14:35:33.983172    1604 api_server.go:269] stopped: https://127.0.0.1:51763/healthz: Get "https://127.0.0.1:51763/healthz": EOF
	I1109 14:35:33.983172    1604 kubeadm.go:602] duration metric: took 4m7.0544135s to restartPrimaryControlPlane
	W1109 14:35:33.983172    1604 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1109 14:35:33.990450    1604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1109 14:36:21.723036    1604 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (47.7320305s)
	I1109 14:36:21.730100    1604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:36:21.758893    1604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:36:21.774929    1604 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 14:36:21.782601    1604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:36:21.798054    1604 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:36:21.798054    1604 kubeadm.go:158] found existing configuration files:
	
	I1109 14:36:21.806582    1604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf
	I1109 14:36:21.820492    1604 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:36:21.828791    1604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:36:21.851341    1604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf
	I1109 14:36:21.867338    1604 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:36:21.874335    1604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:36:21.897533    1604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf
	I1109 14:36:21.911524    1604 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:36:21.918011    1604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:36:21.944573    1604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf
	I1109 14:36:21.961357    1604 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:36:21.968841    1604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
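	The grep/rm pairs above are minikube's stale-config sweep: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is deleted before kubeadm init runs. Condensed into one loop (endpoint string copied from the log; in this run every file was already missing, so each grep exits 2 and the rm is a no-op):

	    # Drop kubeconfigs that do not point at the expected endpoint.
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:0" "/etc/kubernetes/${f}" \
	        || sudo rm -f "/etc/kubernetes/${f}"
	    done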
	I1109 14:36:21.997266    1604 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 14:36:22.141932    1604 kubeadm.go:319] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1109 14:36:22.275596    1604 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 14:40:52.960787    1604 kubeadm.go:319] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1109 14:40:52.961312    1604 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1109 14:40:52.966318    1604 kubeadm.go:319] [init] Using Kubernetes version: v1.28.3
	I1109 14:40:52.966492    1604 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 14:40:52.966539    1604 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 14:40:52.966539    1604 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 14:40:52.966539    1604 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1109 14:40:52.967085    1604 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 14:40:52.970282    1604 out.go:252]   - Generating certificates and keys ...
	I1109 14:40:52.970282    1604 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 14:40:52.970282    1604 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 14:40:52.970852    1604 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1109 14:40:52.970852    1604 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1109 14:40:52.970852    1604 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1109 14:40:52.970852    1604 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1109 14:40:52.970852    1604 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1109 14:40:52.971486    1604 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1109 14:40:52.971486    1604 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1109 14:40:52.971486    1604 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1109 14:40:52.971486    1604 kubeadm.go:319] [certs] Using the existing "sa" key
	I1109 14:40:52.971486    1604 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 14:40:52.972135    1604 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 14:40:52.972274    1604 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 14:40:52.972345    1604 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 14:40:52.972345    1604 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 14:40:52.972345    1604 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 14:40:52.972345    1604 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 14:40:52.974187    1604 out.go:252]   - Booting up control plane ...
	I1109 14:40:52.974781    1604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 14:40:52.974781    1604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 14:40:52.974781    1604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 14:40:52.974781    1604 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 14:40:52.975351    1604 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 14:40:52.975351    1604 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 14:40:52.975880    1604 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1109 14:40:52.975976    1604 kubeadm.go:319] [kubelet-check] Initial timeout of 40s passed.
	I1109 14:40:52.975976    1604 kubeadm.go:319] 
	I1109 14:40:52.975976    1604 kubeadm.go:319] Unfortunately, an error has occurred:
	I1109 14:40:52.975976    1604 kubeadm.go:319] 	timed out waiting for the condition
	I1109 14:40:52.975976    1604 kubeadm.go:319] 
	I1109 14:40:52.975976    1604 kubeadm.go:319] This error is likely caused by:
	I1109 14:40:52.975976    1604 kubeadm.go:319] 	- The kubelet is not running
	I1109 14:40:52.976500    1604 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1109 14:40:52.976537    1604 kubeadm.go:319] 
	I1109 14:40:52.976627    1604 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1109 14:40:52.976627    1604 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1109 14:40:52.976627    1604 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1109 14:40:52.976627    1604 kubeadm.go:319] 
	I1109 14:40:52.977183    1604 kubeadm.go:319] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1109 14:40:52.977263    1604 kubeadm.go:319] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1109 14:40:52.977263    1604 kubeadm.go:319] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1109 14:40:52.977263    1604 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	I1109 14:40:52.977835    1604 kubeadm.go:319] 	Once you have found the failing container, you can inspect its logs with:
	I1109 14:40:52.977835    1604 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
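	kubeadm's failure text above lists the standard triage steps; collected here into one runnable sequence (CRI socket path exactly as printed):

	    # Triage the control-plane bring-up failure, per kubeadm's own hints.
	    systemctl status kubelet                      # is the kubelet running?
	    journalctl -xeu kubelet | tail -n 100         # why it stopped or crashed
	    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock \
	      ps -a | grep kube | grep -v pause           # find the failing container
	    # then inspect it:
	    # crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs <CONTAINERID>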
	W1109 14:40:52.977835    1604 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.28.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1109 14:40:52.985713    1604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1109 14:41:40.351651    1604 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (47.3653823s)
	I1109 14:41:40.360710    1604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:41:40.379667    1604 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 14:41:40.389313    1604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:41:40.405263    1604 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:41:40.405263    1604 kubeadm.go:158] found existing configuration files:
	
	I1109 14:41:40.415416    1604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf
	I1109 14:41:40.432357    1604 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:41:40.440507    1604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:41:40.466717    1604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf
	I1109 14:41:40.483722    1604 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:41:40.494726    1604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:41:40.519577    1604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf
	I1109 14:41:40.533575    1604 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:41:40.540596    1604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:41:40.562372    1604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf
	I1109 14:41:40.577892    1604 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:41:40.588072    1604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 14:41:40.617137    1604 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 14:41:40.753866    1604 kubeadm.go:319] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1109 14:41:40.872289    1604 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 14:46:03.134407    1604 kubeadm.go:319] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1109 14:46:03.134407    1604 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1109 14:46:03.138880    1604 kubeadm.go:319] [init] Using Kubernetes version: v1.28.3
	I1109 14:46:03.138939    1604 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 14:46:03.139260    1604 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 14:46:03.139535    1604 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 14:46:03.139857    1604 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 14:46:03.140080    1604 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 14:46:03.142713    1604 out.go:252]   - Generating certificates and keys ...
	I1109 14:46:03.143322    1604 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 14:46:03.143568    1604 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 14:46:03.143755    1604 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1109 14:46:03.143942    1604 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1109 14:46:03.144190    1604 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1109 14:46:03.144415    1604 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1109 14:46:03.144415    1604 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1109 14:46:03.144415    1604 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1109 14:46:03.144415    1604 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1109 14:46:03.144415    1604 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1109 14:46:03.144969    1604 kubeadm.go:319] [certs] Using the existing "sa" key
	I1109 14:46:03.145024    1604 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 14:46:03.145024    1604 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 14:46:03.145024    1604 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 14:46:03.145024    1604 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 14:46:03.145024    1604 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 14:46:03.145559    1604 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 14:46:03.145672    1604 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 14:46:03.148963    1604 out.go:252]   - Booting up control plane ...
	I1109 14:46:03.148963    1604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 14:46:03.148963    1604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 14:46:03.148963    1604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 14:46:03.149923    1604 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 14:46:03.149923    1604 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 14:46:03.149923    1604 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 14:46:03.149923    1604 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1109 14:46:03.149923    1604 kubeadm.go:319] [kubelet-check] Initial timeout of 40s passed.
	I1109 14:46:03.149923    1604 kubeadm.go:319] 
	I1109 14:46:03.150930    1604 kubeadm.go:319] Unfortunately, an error has occurred:
	I1109 14:46:03.150930    1604 kubeadm.go:319] 	timed out waiting for the condition
	I1109 14:46:03.150930    1604 kubeadm.go:319] 
	I1109 14:46:03.150930    1604 kubeadm.go:319] This error is likely caused by:
	I1109 14:46:03.150930    1604 kubeadm.go:319] 	- The kubelet is not running
	I1109 14:46:03.150930    1604 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1109 14:46:03.150930    1604 kubeadm.go:319] 
	I1109 14:46:03.150930    1604 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1109 14:46:03.150930    1604 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1109 14:46:03.150930    1604 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1109 14:46:03.150930    1604 kubeadm.go:319] 
	I1109 14:46:03.151913    1604 kubeadm.go:319] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1109 14:46:03.151913    1604 kubeadm.go:319] To troubleshoot, list all containers using your preferred container runtime's CLI.
	I1109 14:46:03.151913    1604 kubeadm.go:319] Here is one example of how you may list all running Kubernetes containers by using crictl:
	I1109 14:46:03.151913    1604 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	I1109 14:46:03.151913    1604 kubeadm.go:319] 	Once you have found the failing container, you can inspect its logs with:
	I1109 14:46:03.151913    1604 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	I1109 14:46:03.151913    1604 kubeadm.go:403] duration metric: took 14m36.2712228s to StartCluster
	I1109 14:46:03.152926    1604 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1109 14:46:03.160077    1604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1109 14:46:03.256048    1604 cri.go:89] found id: "f159ce501abc24427240ea30e47c614499580593ebea41b7479437ced5f19334"
	I1109 14:46:03.256048    1604 cri.go:89] found id: ""
	I1109 14:46:03.256048    1604 logs.go:282] 1 containers: [f159ce501abc24427240ea30e47c614499580593ebea41b7479437ced5f19334]
	I1109 14:46:03.266177    1604 ssh_runner.go:195] Run: which crictl
	I1109 14:46:03.274749    1604 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1109 14:46:03.281747    1604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1109 14:46:03.373310    1604 cri.go:89] found id: "6c1d6990ec9a17213a491b9b31dc6b7a0c15a7cd579cd9b12f9b4eb7b925d0fc"
	I1109 14:46:03.373310    1604 cri.go:89] found id: ""
	I1109 14:46:03.373310    1604 logs.go:282] 1 containers: [6c1d6990ec9a17213a491b9b31dc6b7a0c15a7cd579cd9b12f9b4eb7b925d0fc]
	I1109 14:46:03.381575    1604 ssh_runner.go:195] Run: which crictl
	I1109 14:46:03.388574    1604 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1109 14:46:03.397569    1604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1109 14:46:03.483585    1604 cri.go:89] found id: ""
	I1109 14:46:03.483585    1604 logs.go:282] 0 containers: []
	W1109 14:46:03.483585    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:46:03.483585    1604 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1109 14:46:03.492604    1604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1109 14:46:03.582247    1604 cri.go:89] found id: "9916a784d53cc5f2ca77c62d475395dc21acbe3a4cc052be5908741fae99bfee"
	I1109 14:46:03.582247    1604 cri.go:89] found id: ""
	I1109 14:46:03.582247    1604 logs.go:282] 1 containers: [9916a784d53cc5f2ca77c62d475395dc21acbe3a4cc052be5908741fae99bfee]
	I1109 14:46:03.590240    1604 ssh_runner.go:195] Run: which crictl
	I1109 14:46:03.599246    1604 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1109 14:46:03.607237    1604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1109 14:46:03.687234    1604 cri.go:89] found id: ""
	I1109 14:46:03.687234    1604 logs.go:282] 0 containers: []
	W1109 14:46:03.687234    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:46:03.687234    1604 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1109 14:46:03.694235    1604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1109 14:46:03.796939    1604 cri.go:89] found id: "bd1e895dfe296b588d9737b1a5eb36949c0c7eb030142f53536645f3348ee505"
	I1109 14:46:03.796965    1604 cri.go:89] found id: "df0342a399cd6c5cdae1627914dd8cfa5c439d038612bfd86863d8be51c874e9"
	I1109 14:46:03.796965    1604 cri.go:89] found id: ""
	I1109 14:46:03.797011    1604 logs.go:282] 2 containers: [bd1e895dfe296b588d9737b1a5eb36949c0c7eb030142f53536645f3348ee505 df0342a399cd6c5cdae1627914dd8cfa5c439d038612bfd86863d8be51c874e9]
	I1109 14:46:03.805759    1604 ssh_runner.go:195] Run: which crictl
	I1109 14:46:03.820720    1604 ssh_runner.go:195] Run: which crictl
	I1109 14:46:03.826719    1604 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1109 14:46:03.833719    1604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1109 14:46:03.924217    1604 cri.go:89] found id: ""
	I1109 14:46:03.924217    1604 logs.go:282] 0 containers: []
	W1109 14:46:03.924217    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:46:03.925221    1604 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1109 14:46:03.932233    1604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1109 14:46:04.024603    1604 cri.go:89] found id: ""
	I1109 14:46:04.024603    1604 logs.go:282] 0 containers: []
	W1109 14:46:04.024603    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:46:04.024603    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:46:04.024603    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:46:04.128605    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:46:04.128605    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:46:04.240535    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:46:04.232003   22141 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:46:04.233049   22141 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:46:04.234185   22141 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:46:04.234963   22141 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:46:04.237327   22141 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 14:46:04.240535    1604 logs.go:123] Gathering logs for kube-apiserver [f159ce501abc24427240ea30e47c614499580593ebea41b7479437ced5f19334] ...
	I1109 14:46:04.240535    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f159ce501abc24427240ea30e47c614499580593ebea41b7479437ced5f19334"
	I1109 14:46:04.329115    1604 logs.go:123] Gathering logs for kube-controller-manager [bd1e895dfe296b588d9737b1a5eb36949c0c7eb030142f53536645f3348ee505] ...
	I1109 14:46:04.329115    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd1e895dfe296b588d9737b1a5eb36949c0c7eb030142f53536645f3348ee505"
	I1109 14:46:04.403632    1604 logs.go:123] Gathering logs for kube-controller-manager [df0342a399cd6c5cdae1627914dd8cfa5c439d038612bfd86863d8be51c874e9] ...
	I1109 14:46:04.403632    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df0342a399cd6c5cdae1627914dd8cfa5c439d038612bfd86863d8be51c874e9"
	I1109 14:46:04.474152    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:46:04.474152    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:46:04.554610    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:46:04.554610    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:46:04.583252    1604 logs.go:123] Gathering logs for etcd [6c1d6990ec9a17213a491b9b31dc6b7a0c15a7cd579cd9b12f9b4eb7b925d0fc] ...
	I1109 14:46:04.583363    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c1d6990ec9a17213a491b9b31dc6b7a0c15a7cd579cd9b12f9b4eb7b925d0fc"
	I1109 14:46:04.677842    1604 logs.go:123] Gathering logs for kube-scheduler [9916a784d53cc5f2ca77c62d475395dc21acbe3a4cc052be5908741fae99bfee] ...
	I1109 14:46:04.677981    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9916a784d53cc5f2ca77c62d475395dc21acbe3a4cc052be5908741fae99bfee"
	I1109 14:46:04.803422    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:46:04.803422    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1109 14:46:04.912632    1604 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.28.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1109 14:46:04.912632    1604 out.go:285] * 
	W1109 14:46:04.912632    1604 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.28.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1109 14:46:04.913628    1604 out.go:285] * 
	W1109 14:46:04.915628    1604 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 14:46:04.918630    1604 out.go:203] 
	W1109 14:46:04.922623    1604 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.28.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1109 14:46:04.922623    1604 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1109 14:46:04.922623    1604 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1109 14:46:04.925626    1604 out.go:203] 

                                                
                                                
** /stderr **
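The kubelet triage that kubeadm suggests in the captured stderr can be run directly inside the node. A minimal sketch, using the profile name and cri-dockerd socket taken from the log above; CONTAINERID is a placeholder to fill in from the ps output:

	minikube ssh -p missing-upgrade-184300
	# inside the node: check whether the kubelet is running and why it exited
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 100
	# list kube control-plane containers, then read the failing one's logs
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID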
version_upgrade_test.go:331: failed missing container upgrade from v1.32.0. args: out/minikube-windows-amd64.exe start -p missing-upgrade-184300 --memory=3072 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:333: *** TestMissingContainerUpgrade FAILED at 2025-11-09 14:46:07.4721171 +0000 UTC m=+4617.288552401
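For a local repro, the failing invocation from version_upgrade_test.go:331 can be replayed outside the harness; a sketch, assuming the same out/ binary and Docker driver as this job:

	out/minikube-windows-amd64.exe start -p missing-upgrade-184300 --memory=3072 --alsologtostderr -v=1 --driver=docker
	# collect the logs the error message asks for, then clean up the profile
	out/minikube-windows-amd64.exe logs -p missing-upgrade-184300 --file=logs.txt
	out/minikube-windows-amd64.exe delete -p missing-upgrade-184300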
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMissingContainerUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect missing-upgrade-184300
helpers_test.go:243: (dbg) docker inspect missing-upgrade-184300:
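The full inspect JSON follows. When only a few fields matter, docker inspect accepts a Go template via --format; a small sketch, using field paths that appear in the output below:

	docker inspect --format '{{.State.Status}} pid={{.State.Pid}} started={{.State.StartedAt}}' missing-upgrade-184300
	docker inspect --format '{{json .NetworkSettings.Ports}}' missing-upgrade-184300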

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "659a1e82b8fc95337e7a1e2eb9b70e9b9bf9f447e50159f3f7a32f8f3f29e0c0",
	        "Created": "2025-11-09T14:30:52.746953276Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 216517,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:30:53.144838767Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:dbc648475405a75e8c472743ce721cb0b74db98d9501831a17a27a54e2bd3e47",
	        "ResolvConfPath": "/var/lib/docker/containers/659a1e82b8fc95337e7a1e2eb9b70e9b9bf9f447e50159f3f7a32f8f3f29e0c0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/659a1e82b8fc95337e7a1e2eb9b70e9b9bf9f447e50159f3f7a32f8f3f29e0c0/hostname",
	        "HostsPath": "/var/lib/docker/containers/659a1e82b8fc95337e7a1e2eb9b70e9b9bf9f447e50159f3f7a32f8f3f29e0c0/hosts",
	        "LogPath": "/var/lib/docker/containers/659a1e82b8fc95337e7a1e2eb9b70e9b9bf9f447e50159f3f7a32f8f3f29e0c0/659a1e82b8fc95337e7a1e2eb9b70e9b9bf9f447e50159f3f7a32f8f3f29e0c0-json.log",
	        "Name": "/missing-upgrade-184300",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-184300:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-184300",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/10a3c7c30ae88645e96ab7fbf0f410adc2ab1dede765fedf76722ce367ab1213-init/diff:/var/lib/docker/overlay2/55ad083245039cbda63d9302ce4d213654ad22f536902bc2ed5a10b3790ee955/diff",
	                "MergedDir": "/var/lib/docker/overlay2/10a3c7c30ae88645e96ab7fbf0f410adc2ab1dede765fedf76722ce367ab1213/merged",
	                "UpperDir": "/var/lib/docker/overlay2/10a3c7c30ae88645e96ab7fbf0f410adc2ab1dede765fedf76722ce367ab1213/diff",
	                "WorkDir": "/var/lib/docker/overlay2/10a3c7c30ae88645e96ab7fbf0f410adc2ab1dede765fedf76722ce367ab1213/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-184300",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-184300/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-184300",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-184300",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-184300",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "186c53fb68a552b2f45c67480df88e8b614ec1bc94935556ece965780c580c8d",
	            "SandboxKey": "/var/run/docker/netns/186c53fb68a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51764"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51760"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51761"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51762"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51763"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-184300": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.121.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:79:02",
	                    "DriverOpts": null,
	                    "NetworkID": "b58f83d134bf85cd7031bc684b0ee1bcee8db3807517302e73afa040f155af07",
	                    "EndpointID": "1b2cdb5c2ae972255e704fcdda97168571f42fe3837615c7f9011e613e9e9106",
	                    "Gateway": "192.168.121.1",
	                    "IPAddress": "192.168.121.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "missing-upgrade-184300",
	                        "659a1e82b8fc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
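The JSON above is the raw docker container inspect record for the kic node. A minimal sketch of reproducing it by hand and pulling out the forwarded SSH port, using the same Go template that cli_runner invokes later in this log (profile name taken from this run):

	# full container state, as dumped above
	docker container inspect missing-upgrade-184300
	# just the host port published for the container's sshd (127.0.0.1:51764 in this run)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' missing-upgrade-184300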
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p missing-upgrade-184300 -n missing-upgrade-184300
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p missing-upgrade-184300 -n missing-upgrade-184300: exit status 2 (673.6208ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
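A non-zero exit from minikube status can simply mean some component other than the host is not yet running, which is why the helper flags it as "may be ok" while stdout still reads Running. A quick sketch for viewing the text and the exit code together (binary and profile taken from this run):

	out/minikube-windows-amd64.exe status --format={{.Host}} -p missing-upgrade-184300
	echo $?   # 2 in this run, even though the host line reads Running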
helpers_test.go:252: <<< TestMissingContainerUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMissingContainerUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p missing-upgrade-184300 logs -n 25
E1109 14:46:09.405419   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p missing-upgrade-184300 logs -n 25: (1.9471499s)
helpers_test.go:260: TestMissingContainerUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────┬───────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                      │    PROFILE    │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────┼───────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p calico-643800 sudo iptables -t nat -L -n -v                                 │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:45 UTC │ 09 Nov 25 14:45 UTC │
	│ ssh     │ -p calico-643800 sudo systemctl status kubelet --all --full --no-pager         │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:45 UTC │ 09 Nov 25 14:45 UTC │
	│ ssh     │ -p calico-643800 sudo systemctl cat kubelet --no-pager                         │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:45 UTC │ 09 Nov 25 14:45 UTC │
	│ ssh     │ -p calico-643800 sudo journalctl -xeu kubelet --all --full --no-pager          │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:45 UTC │ 09 Nov 25 14:45 UTC │
	│ ssh     │ -p calico-643800 sudo cat /etc/kubernetes/kubelet.conf                         │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:45 UTC │ 09 Nov 25 14:45 UTC │
	│ ssh     │ -p calico-643800 sudo cat /var/lib/kubelet/config.yaml                         │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:45 UTC │ 09 Nov 25 14:45 UTC │
	│ ssh     │ -p calico-643800 sudo systemctl status docker --all --full --no-pager          │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:45 UTC │ 09 Nov 25 14:45 UTC │
	│ ssh     │ -p calico-643800 sudo systemctl cat docker --no-pager                          │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:45 UTC │ 09 Nov 25 14:45 UTC │
	│ ssh     │ -p calico-643800 sudo cat /etc/docker/daemon.json                              │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:45 UTC │ 09 Nov 25 14:45 UTC │
	│ ssh     │ -p calico-643800 sudo docker system info                                       │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:45 UTC │ 09 Nov 25 14:45 UTC │
	│ ssh     │ -p calico-643800 sudo systemctl status cri-docker --all --full --no-pager      │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:45 UTC │ 09 Nov 25 14:45 UTC │
	│ ssh     │ -p calico-643800 sudo systemctl cat cri-docker --no-pager                      │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:45 UTC │ 09 Nov 25 14:45 UTC │
	│ ssh     │ -p calico-643800 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:45 UTC │ 09 Nov 25 14:45 UTC │
	│ ssh     │ -p calico-643800 sudo cat /usr/lib/systemd/system/cri-docker.service           │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:45 UTC │ 09 Nov 25 14:45 UTC │
	│ ssh     │ -p calico-643800 sudo cri-dockerd --version                                    │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:45 UTC │ 09 Nov 25 14:46 UTC │
	│ ssh     │ -p calico-643800 sudo systemctl status containerd --all --full --no-pager      │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:46 UTC │ 09 Nov 25 14:46 UTC │
	│ ssh     │ -p calico-643800 sudo systemctl cat containerd --no-pager                      │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:46 UTC │ 09 Nov 25 14:46 UTC │
	│ ssh     │ -p calico-643800 sudo cat /lib/systemd/system/containerd.service               │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:46 UTC │ 09 Nov 25 14:46 UTC │
	│ ssh     │ -p calico-643800 sudo cat /etc/containerd/config.toml                          │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:46 UTC │ 09 Nov 25 14:46 UTC │
	│ ssh     │ -p calico-643800 sudo containerd config dump                                   │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:46 UTC │ 09 Nov 25 14:46 UTC │
	│ ssh     │ -p calico-643800 sudo systemctl status crio --all --full --no-pager            │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:46 UTC │                     │
	│ ssh     │ -p calico-643800 sudo systemctl cat crio --no-pager                            │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:46 UTC │ 09 Nov 25 14:46 UTC │
	│ ssh     │ -p calico-643800 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:46 UTC │ 09 Nov 25 14:46 UTC │
	│ ssh     │ -p calico-643800 sudo crio config                                              │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:46 UTC │ 09 Nov 25 14:46 UTC │
	│ delete  │ -p calico-643800                                                               │ calico-643800 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 14:46 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────┴───────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
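	Each Audit row above is one command replayed against a profile, so any of them can be re-run by hand; e.g. the kubelet probe (a sketch, profile name from the table):
	
	  minikube ssh -p calico-643800 "sudo systemctl status kubelet --all --full --no-pager"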
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:45:29
	Running on machine: minikube4
	Binary: Built with gc go1.24.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
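	Per the format line above, the first character of each record is its severity ([IWEF] = Info/Warning/Error/Fatal), followed by mmdd, so warnings and errors can be filtered out of a saved copy of this log with a plain pattern (file name assumed):
	
	  grep -E '^[WEF][0-9]{4} ' minikube.log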
	I1109 14:45:28.970626   11408 out.go:360] Setting OutFile to fd 1080 ...
	I1109 14:45:29.019806   11408 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:45:29.019806   11408 out.go:374] Setting ErrFile to fd 1508...
	I1109 14:45:29.019806   11408 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:45:29.042346   11408 out.go:368] Setting JSON to false
	I1109 14:45:29.047113   11408 start.go:133] hostinfo: {"hostname":"minikube4","uptime":5478,"bootTime":1762694050,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6456 Build 19045.6456","kernelVersion":"10.0.19045.6456 Build 19045.6456","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1109 14:45:29.047113   11408 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1109 14:45:29.049508   11408 out.go:179] * [enable-default-cni-643800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
	I1109 14:45:29.055576   11408 notify.go:221] Checking for updates...
	I1109 14:45:29.058660   11408 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1109 14:45:29.060624   11408 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:45:29.062623   11408 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1109 14:45:29.064622   11408 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:45:29.067614   11408 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:45:29.070616   11408 config.go:182] Loaded profile config "calico-643800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1109 14:45:29.070616   11408 config.go:182] Loaded profile config "false-643800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1109 14:45:29.071612   11408 config.go:182] Loaded profile config "missing-upgrade-184300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1109 14:45:29.071612   11408 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:45:29.209056   11408 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1109 14:45:29.215048   11408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:45:29.480547   11408 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:91 OomKillDisable:true NGoroutines:95 SystemTime:2025-11-09 14:45:29.460482668 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1109 14:45:29.483544   11408 out.go:179] * Using the docker driver based on user configuration
	I1109 14:45:29.486535   11408 start.go:309] selected driver: docker
	I1109 14:45:29.486535   11408 start.go:930] validating driver "docker" against <nil>
	I1109 14:45:29.486535   11408 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:45:29.533463   11408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:45:29.788481   11408 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:91 OomKillDisable:true NGoroutines:95 SystemTime:2025-11-09 14:45:29.76810613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1109 14:45:29.789470   11408 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	E1109 14:45:29.789470   11408 start_flags.go:481] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1109 14:45:29.789470   11408 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:45:29.793470   11408 out.go:179] * Using Docker Desktop driver with root privileges
	I1109 14:45:29.795469   11408 cni.go:84] Creating CNI manager for "bridge"
	I1109 14:45:29.795469   11408 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1109 14:45:29.795469   11408 start.go:353] cluster config:
	{Name:enable-default-cni-643800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:enable-default-cni-643800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:45:29.800470   11408 out.go:179] * Starting "enable-default-cni-643800" primary control-plane node in "enable-default-cni-643800" cluster
	I1109 14:45:29.802469   11408 cache.go:134] Beginning downloading kic base image for docker with docker
	I1109 14:45:29.806471   11408 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:45:29.808469   11408 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1109 14:45:29.808469   11408 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:45:29.808469   11408 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
	I1109 14:45:29.808469   11408 cache.go:65] Caching tarball of preloaded images
	I1109 14:45:29.809469   11408 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1109 14:45:29.809469   11408 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1109 14:45:29.809469   11408 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\config.json ...
	I1109 14:45:29.809469   11408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\config.json: {Name:mk56b0474ecbe7121ceffe5bb621c1a595d30169 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:45:29.892733   11408 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:45:29.892783   11408 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:45:29.892869   11408 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:45:29.892914   11408 start.go:360] acquireMachinesLock for enable-default-cni-643800: {Name:mk4ac0b9c855ef4ff1a4f22103ba006b7bea0840 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:45:29.893043   11408 start.go:364] duration metric: took 88.7µs to acquireMachinesLock for "enable-default-cni-643800"
	I1109 14:45:29.893321   11408 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-643800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:enable-default-cni-643800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1109 14:45:29.893461   11408 start.go:125] createHost starting for "" (driver="docker")
	I1109 14:45:27.029343    7796 oci.go:144] the created container "false-643800" has a running status.
	I1109 14:45:27.029343    7796 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\false-643800\id_rsa...
	I1109 14:45:27.516849    7796 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\false-643800\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 14:45:28.179004    7796 cli_runner.go:164] Run: docker container inspect false-643800 --format={{.State.Status}}
	I1109 14:45:28.247819    7796 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 14:45:28.247819    7796 kic_runner.go:114] Args: [docker exec --privileged false-643800 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 14:45:28.368113    7796 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\false-643800\id_rsa...
	I1109 14:45:30.711140    7796 cli_runner.go:164] Run: docker container inspect false-643800 --format={{.State.Status}}
	I1109 14:45:30.766983    7796 machine.go:94] provisionDockerMachine start ...
	I1109 14:45:30.776579    7796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-643800
	I1109 14:45:30.831314    7796 main.go:143] libmachine: Using SSH client type: native
	I1109 14:45:30.846133    7796 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x11319e0] 0x1134500 <nil>  [] 0s} 127.0.0.1 54027 <nil> <nil>}
	I1109 14:45:30.846210    7796 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:45:31.024698    7796 main.go:143] libmachine: SSH cmd err, output: <nil>: false-643800
	
	I1109 14:45:31.024782    7796 ubuntu.go:182] provisioning hostname "false-643800"
	I1109 14:45:31.035056    7796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-643800
	I1109 14:45:31.101975    7796 main.go:143] libmachine: Using SSH client type: native
	I1109 14:45:31.102973    7796 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x11319e0] 0x1134500 <nil>  [] 0s} 127.0.0.1 54027 <nil> <nil>}
	I1109 14:45:31.102973    7796 main.go:143] libmachine: About to run SSH command:
	sudo hostname false-643800 && echo "false-643800" | sudo tee /etc/hostname
	I1109 14:45:31.371942    7796 main.go:143] libmachine: SSH cmd err, output: <nil>: false-643800
	
	I1109 14:45:31.380055    7796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-643800
	I1109 14:45:31.440966    7796 main.go:143] libmachine: Using SSH client type: native
	I1109 14:45:31.441533    7796 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x11319e0] 0x1134500 <nil>  [] 0s} 127.0.0.1 54027 <nil> <nil>}
	I1109 14:45:31.441533    7796 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfalse-643800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 false-643800/g' /etc/hosts;
				else 
					echo '127.0.1.1 false-643800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:45:31.744790    7796 main.go:143] libmachine: SSH cmd err, output: <nil>: 
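	The /etc/hosts edit above is deliberately idempotent: the outer grep skips everything when a line already ends with the hostname, and the inner branch either rewrites the existing 127.0.1.1 entry or appends a new one. The result can be checked from the host side with the same docker exec path the kic runner uses (a sketch):
	
	  docker exec false-643800 grep 127.0.1.1 /etc/hosts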
	I1109 14:45:31.744790    7796 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1109 14:45:31.744790    7796 ubuntu.go:190] setting up certificates
	I1109 14:45:31.744790    7796 provision.go:84] configureAuth start
	I1109 14:45:31.752080    7796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-643800
	I1109 14:45:31.809112    7796 provision.go:143] copyHostCerts
	I1109 14:45:31.809613    7796 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1109 14:45:31.809673    7796 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1109 14:45:31.809793    7796 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1109 14:45:31.810899    7796 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1109 14:45:31.810899    7796 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1109 14:45:31.811215    7796 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1109 14:45:31.812308    7796 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1109 14:45:31.812357    7796 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1109 14:45:31.812521    7796 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1109 14:45:31.813643    7796 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.false-643800 san=[127.0.0.1 192.168.85.2 false-643800 localhost minikube]
	I1109 14:45:29.897028   11408 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1109 14:45:29.897028   11408 start.go:159] libmachine.API.Create for "enable-default-cni-643800" (driver="docker")
	I1109 14:45:29.897028   11408 client.go:173] LocalClient.Create starting
	I1109 14:45:29.897733   11408 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1109 14:45:29.897938   11408 main.go:143] libmachine: Decoding PEM data...
	I1109 14:45:29.897975   11408 main.go:143] libmachine: Parsing certificate...
	I1109 14:45:29.898102   11408 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1109 14:45:29.898266   11408 main.go:143] libmachine: Decoding PEM data...
	I1109 14:45:29.898304   11408 main.go:143] libmachine: Parsing certificate...
	I1109 14:45:29.905948   11408 cli_runner.go:164] Run: docker network inspect enable-default-cni-643800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 14:45:29.956941   11408 cli_runner.go:211] docker network inspect enable-default-cni-643800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 14:45:29.961944   11408 network_create.go:284] running [docker network inspect enable-default-cni-643800] to gather additional debugging logs...
	I1109 14:45:29.961944   11408 cli_runner.go:164] Run: docker network inspect enable-default-cni-643800
	W1109 14:45:30.016944   11408 cli_runner.go:211] docker network inspect enable-default-cni-643800 returned with exit code 1
	I1109 14:45:30.016944   11408 network_create.go:287] error running [docker network inspect enable-default-cni-643800]: docker network inspect enable-default-cni-643800: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network enable-default-cni-643800 not found
	I1109 14:45:30.016944   11408 network_create.go:289] output of [docker network inspect enable-default-cni-643800]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network enable-default-cni-643800 not found
	
	** /stderr **
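	The "network enable-default-cni-643800 not found" error above is the expected result of the probe: minikube inspects the named network first and only creates it when the inspect fails. The same probe-then-create flow reduces to two docker commands (a sketch; subnet and gateway taken from the create call below):
	
	  docker network inspect enable-default-cni-643800 >/dev/null 2>&1 || \
	    docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 enable-default-cni-643800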
	I1109 14:45:30.025945   11408 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:45:30.114949   11408 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1109 14:45:30.147173   11408 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1109 14:45:30.178668   11408 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1109 14:45:30.209370   11408 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1109 14:45:30.224550   11408 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1109 14:45:30.238240   11408 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001746a50}
	I1109 14:45:30.238240   11408 network_create.go:124] attempt to create docker network enable-default-cni-643800 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1109 14:45:30.243762   11408 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-643800 enable-default-cni-643800
	I1109 14:45:30.390814   11408 network_create.go:108] docker network enable-default-cni-643800 192.168.94.0/24 created
	I1109 14:45:30.390814   11408 kic.go:121] calculated static IP "192.168.94.2" for the "enable-default-cni-643800" container
	I1109 14:45:30.407749   11408 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 14:45:30.463755   11408 cli_runner.go:164] Run: docker volume create enable-default-cni-643800 --label name.minikube.sigs.k8s.io=enable-default-cni-643800 --label created_by.minikube.sigs.k8s.io=true
	I1109 14:45:30.519748   11408 oci.go:103] Successfully created a docker volume enable-default-cni-643800
	I1109 14:45:30.525745   11408 cli_runner.go:164] Run: docker run --rm --name enable-default-cni-643800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-643800 --entrypoint /usr/bin/test -v enable-default-cni-643800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1109 14:45:31.846378   11408 cli_runner.go:217] Completed: docker run --rm --name enable-default-cni-643800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-643800 --entrypoint /usr/bin/test -v enable-default-cni-643800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib: (1.3205702s)
	I1109 14:45:31.846418   11408 oci.go:107] Successfully prepared a docker volume enable-default-cni-643800
	I1109 14:45:31.846533   11408 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1109 14:45:31.846533   11408 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 14:45:31.851840   11408 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-643800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1109 14:45:32.455384    7796 provision.go:177] copyRemoteCerts
	I1109 14:45:32.467660    7796 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:45:32.476856    7796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-643800
	I1109 14:45:32.528857    7796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54027 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\false-643800\id_rsa Username:docker}
	I1109 14:45:32.649228    7796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:45:32.680474    7796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1204 bytes)
	I1109 14:45:32.712337    7796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 14:45:32.742200    7796 provision.go:87] duration metric: took 996.4038ms to configureAuth
	I1109 14:45:32.742200    7796 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:45:32.742841    7796 config.go:182] Loaded profile config "false-643800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1109 14:45:32.748935    7796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-643800
	I1109 14:45:32.803982    7796 main.go:143] libmachine: Using SSH client type: native
	I1109 14:45:32.804981    7796 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x11319e0] 0x1134500 <nil>  [] 0s} 127.0.0.1 54027 <nil> <nil>}
	I1109 14:45:32.804981    7796 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1109 14:45:32.966489    7796 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1109 14:45:32.966489    7796 ubuntu.go:71] root file system type: overlay
	I1109 14:45:32.966489    7796 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1109 14:45:32.972994    7796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-643800
	I1109 14:45:33.035456    7796 main.go:143] libmachine: Using SSH client type: native
	I1109 14:45:33.035456    7796 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x11319e0] 0x1134500 <nil>  [] 0s} 127.0.0.1 54027 <nil> <nil>}
	I1109 14:45:33.035456    7796 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1109 14:45:33.220015    7796 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
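	The empty ExecStart= in the unit above is the standard systemd idiom for replacing, rather than appending to, an inherited command: the blank directive clears whatever the base unit set, and the next ExecStart= becomes the only command. In its minimal drop-in form (a sketch; path and flags hypothetical):
	
	  # /etc/systemd/system/docker.service.d/override.conf
	  [Service]
	  ExecStart=
	  ExecStart=/usr/bin/dockerd -H fd://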
	I1109 14:45:33.226026    7796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-643800
	I1109 14:45:33.280028    7796 main.go:143] libmachine: Using SSH client type: native
	I1109 14:45:33.280028    7796 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x11319e0] 0x1134500 <nil>  [] 0s} 127.0.0.1 54027 <nil> <nil>}
	I1109 14:45:33.280028    7796 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1109 14:45:41.236495    7796 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-10-08 12:15:50.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-11-09 14:45:33.213710500 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1109 14:45:41.236495    7796 machine.go:97] duration metric: took 10.4693834s to provisionDockerMachine
	I1109 14:45:41.236495    7796 client.go:176] duration metric: took 33.3127619s to LocalClient.Create
	I1109 14:45:41.236495    7796 start.go:167] duration metric: took 33.3137639s to libmachine.API.Create "false-643800"
	I1109 14:45:41.236495    7796 start.go:293] postStartSetup for "false-643800" (driver="docker")
	I1109 14:45:41.237027    7796 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:45:41.246894    7796 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:45:41.252001    7796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-643800
	I1109 14:45:41.308013    7796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54027 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\false-643800\id_rsa Username:docker}
	I1109 14:45:41.435350    7796 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:45:41.442352    7796 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:45:41.442352    7796 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:45:41.442352    7796 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1109 14:45:41.442352    7796 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1109 14:45:41.443354    7796 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\103362.pem -> 103362.pem in /etc/ssl/certs
	I1109 14:45:41.450353    7796 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:45:41.463354    7796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\103362.pem --> /etc/ssl/certs/103362.pem (1708 bytes)
	I1109 14:45:41.491349    7796 start.go:296] duration metric: took 254.3186ms for postStartSetup
	I1109 14:45:41.502347    7796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-643800
	I1109 14:45:41.552357    7796 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\config.json ...
	I1109 14:45:41.563350    7796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:45:41.569357    7796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-643800
	I1109 14:45:41.615356    7796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54027 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\false-643800\id_rsa Username:docker}
	I1109 14:45:41.748910    7796 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:45:41.756908    7796 start.go:128] duration metric: took 33.83717s to createHost
	I1109 14:45:41.756908    7796 start.go:83] releasing machines lock for "false-643800", held for 33.8381616s
	I1109 14:45:41.761900    7796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-643800
	I1109 14:45:41.812891    7796 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1109 14:45:41.820720    7796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-643800
	I1109 14:45:41.820720    7796 ssh_runner.go:195] Run: cat /version.json
	I1109 14:45:41.826203    7796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-643800
	I1109 14:45:41.876594    7796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54027 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\false-643800\id_rsa Username:docker}
	I1109 14:45:41.877621    7796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54027 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\false-643800\id_rsa Username:docker}
	W1109 14:45:42.231665    7796 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
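	Exit status 127 is the shell's "command not found": the runner forwarded the Windows binary name curl.exe over SSH into the Linux node, where the tool is plain curl, hence the non-fatal warning a few lines below. An equivalent manual probe from the host, using this run's profile name:

		minikube -p false-643800 ssh -- curl -sS -m 2 https://registry.k8s.io/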
	I1109 14:45:42.240137    7796 ssh_runner.go:195] Run: systemctl --version
	I1109 14:45:42.271675    7796 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:45:42.279684    7796 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:45:42.289684    7796 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	W1109 14:45:42.336526    7796 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1109 14:45:42.336526    7796 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1109 14:45:42.582645    7796 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1109 14:45:42.609889    7796 cni.go:308] configured [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1109 14:45:42.609922    7796 start.go:496] detecting cgroup driver to use...
	I1109 14:45:42.609967    7796 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:45:42.610166    7796 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:45:42.642043    7796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1109 14:45:42.664037    7796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1109 14:45:42.679039    7796 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1109 14:45:42.686043    7796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1109 14:45:42.723039    7796 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1109 14:45:42.750322    7796 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1109 14:45:42.782439    7796 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1109 14:45:42.834458    7796 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:45:42.863205    7796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1109 14:45:42.889110    7796 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1109 14:45:43.105065    7796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
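	The sed edits above retarget containerd's CRI plugin: the pause/sandbox image, OOM score handling, unprivileged ports, and the runc cgroup driver. In a stock config.toml the cgroup setting being flipped sits under the runc options table (table path per containerd's CRI plugin config); the false value matches the "cgroupfs" driver detected in this run:

		[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
		  SystemdCgroup = false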
	I1109 14:45:43.131117    7796 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:45:43.154882    7796 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:45:43.175436    7796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:45:43.322906    7796 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1109 14:45:43.516195    7796 start.go:496] detecting cgroup driver to use...
	I1109 14:45:43.516195    7796 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:45:43.525081    7796 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1109 14:45:43.557783    7796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:45:43.588290    7796 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:45:43.890073    7796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:45:43.923936    7796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1109 14:45:43.947343    7796 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:45:43.979271    7796 ssh_runner.go:195] Run: which cri-dockerd
	I1109 14:45:43.994497    7796 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1109 14:45:44.009486    7796 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1109 14:45:44.043558    7796 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1109 14:45:44.200184    7796 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1109 14:45:44.345194    7796 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1109 14:45:44.345364    7796 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
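	The 130-byte daemon.json payload itself is not echoed in the log. The documented way to pin Docker's cgroup driver through that file is the exec-opts key, so a sketch consistent with the "cgroupfs" driver chosen here would be:

		{
		  "exec-opts": ["native.cgroupdriver=cgroupfs"]
		}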
	I1109 14:45:44.374272    7796 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1109 14:45:44.399695    7796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:45:44.561889    7796 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1109 14:45:45.506013    7796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:45:45.532014    7796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1109 14:45:45.563344    7796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1109 14:45:45.597550    7796 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1109 14:45:45.798183    7796 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1109 14:45:46.063113    7796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:45:46.243118    7796 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1109 14:45:46.274132    7796 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1109 14:45:46.301115    7796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:45:46.503072    7796 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1109 14:45:46.676633    7796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1109 14:45:46.703641    7796 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1109 14:45:46.711638    7796 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1109 14:45:46.719633    7796 start.go:564] Will wait 60s for crictl version
	I1109 14:45:46.727634    7796 ssh_runner.go:195] Run: which crictl
	I1109 14:45:46.741638    7796 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:45:46.791636    7796 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.1
	RuntimeApiVersion:  v1
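	crictl resolves its endpoint from the /etc/crictl.yaml written a few lines earlier; the same version query with the endpoint made explicit (a standard crictl flag) would be:

		sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version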
	I1109 14:45:46.799636    7796 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 14:45:46.851632    7796 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 14:45:46.900221    7796 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.5.1 ...
	I1109 14:45:46.907867    7796 cli_runner.go:164] Run: docker exec -t false-643800 dig +short host.docker.internal
	I1109 14:45:44.713844   11408 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-643800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (12.8618478s)
	I1109 14:45:44.714864   11408 kic.go:203] duration metric: took 12.8681739s to extract preloaded images to volume ...
	I1109 14:45:44.720842   11408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:45:44.975725   11408 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:90 OomKillDisable:true NGoroutines:95 SystemTime:2025-11-09 14:45:44.955105428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1109 14:45:44.981735   11408 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 14:45:45.265115   11408 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-643800 --name enable-default-cni-643800 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-643800 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-643800 --network enable-default-cni-643800 --ip 192.168.94.2 --volume enable-default-cni-643800:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1109 14:45:46.029933   11408 cli_runner.go:164] Run: docker container inspect enable-default-cni-643800 --format={{.State.Running}}
	I1109 14:45:46.098114   11408 cli_runner.go:164] Run: docker container inspect enable-default-cni-643800 --format={{.State.Status}}
	I1109 14:45:46.160115   11408 cli_runner.go:164] Run: docker exec enable-default-cni-643800 stat /var/lib/dpkg/alternatives/iptables
	I1109 14:45:46.277115   11408 oci.go:144] the created container "enable-default-cni-643800" has a running status.
	I1109 14:45:46.277115   11408 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\enable-default-cni-643800\id_rsa...
	I1109 14:45:46.395582   11408 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\enable-default-cni-643800\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 14:45:46.490077   11408 cli_runner.go:164] Run: docker container inspect enable-default-cni-643800 --format={{.State.Status}}
	I1109 14:45:46.574635   11408 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 14:45:46.574635   11408 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-643800 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 14:45:46.703641   11408 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\enable-default-cni-643800\id_rsa...
	I1109 14:45:47.075993    7796 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1109 14:45:47.085001    7796 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1109 14:45:47.092995    7796 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:45:47.118005    7796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" false-643800
	I1109 14:45:47.181000    7796 kubeadm.go:884] updating cluster {Name:false-643800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:false-643800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:45:47.181998    7796 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1109 14:45:47.190006    7796 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 14:45:47.229018    7796 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1109 14:45:47.229018    7796 docker.go:621] Images already preloaded, skipping extraction
	I1109 14:45:47.241002    7796 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 14:45:47.286006    7796 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1109 14:45:47.286006    7796 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:45:47.286006    7796 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 docker true true} ...
	I1109 14:45:47.286006    7796 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=false-643800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:false-643800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false}
	I1109 14:45:47.295002    7796 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1109 14:45:47.378002    7796 cni.go:84] Creating CNI manager for "false"
	I1109 14:45:47.378002    7796 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:45:47.378002    7796 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:false-643800 NodeName:false-643800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:45:47.378002    7796 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "false-643800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
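
	A generated config like the one above can be exercised without changing node state by pointing kubeadm's dry-run mode at the file minikube stages (path from this log):

		sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run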
	
	I1109 14:45:47.386010    7796 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:45:47.399001    7796 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:45:47.406006    7796 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:45:47.422008    7796 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1109 14:45:47.446997    7796 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:45:47.468999    7796 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1109 14:45:47.500004    7796 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:45:47.508001    7796 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:45:47.538004    7796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:45:47.704911    7796 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:45:47.736241    7796 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800 for IP: 192.168.85.2
	I1109 14:45:47.736241    7796 certs.go:195] generating shared ca certs ...
	I1109 14:45:47.736241    7796 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:45:47.737385    7796 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1109 14:45:47.737532    7796 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1109 14:45:47.737532    7796 certs.go:257] generating profile certs ...
	I1109 14:45:47.738457    7796 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\client.key
	I1109 14:45:47.738547    7796 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\client.crt with IP's: []
	I1109 14:45:47.837209    7796 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\client.crt ...
	I1109 14:45:47.837209    7796 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\client.crt: {Name:mka917f8febc88303213c555c214928fc9c62486 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:45:47.838210    7796 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\client.key ...
	I1109 14:45:47.838210    7796 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\client.key: {Name:mk120586f0ca83e8a7283cdfc3568151de752e25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:45:47.839210    7796 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\apiserver.key.987b1b43
	I1109 14:45:47.839210    7796 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\apiserver.crt.987b1b43 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1109 14:45:47.936876    7796 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\apiserver.crt.987b1b43 ...
	I1109 14:45:47.936876    7796 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\apiserver.crt.987b1b43: {Name:mk27c51ec3c512c9f6517d69c4e42b6d585678ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:45:47.936876    7796 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\apiserver.key.987b1b43 ...
	I1109 14:45:47.936876    7796 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\apiserver.key.987b1b43: {Name:mkf90670b67daef79447d175988a1912c70f3d1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:45:47.938760    7796 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\apiserver.crt.987b1b43 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\apiserver.crt
	I1109 14:45:47.955385    7796 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\apiserver.key.987b1b43 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\apiserver.key
	I1109 14:45:47.956403    7796 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\proxy-client.key
	I1109 14:45:47.956403    7796 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\proxy-client.crt with IP's: []
	I1109 14:45:48.138341    7796 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\proxy-client.crt ...
	I1109 14:45:48.138341    7796 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\proxy-client.crt: {Name:mkf8ed9ead3ebef7eec385ecde96c52f719438d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:45:48.138926    7796 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\proxy-client.key ...
	I1109 14:45:48.138926    7796 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\proxy-client.key: {Name:mk61d62cf349592cfec5207f1006419e28b429c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:45:48.156372    7796 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\10336.pem (1338 bytes)
	W1109 14:45:48.156372    7796 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\10336_empty.pem, impossibly tiny 0 bytes
	I1109 14:45:48.156921    7796 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1109 14:45:48.157226    7796 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1109 14:45:48.157569    7796 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1109 14:45:48.157881    7796 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1109 14:45:48.157912    7796 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\103362.pem (1708 bytes)
	I1109 14:45:48.159839    7796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:45:48.191449    7796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 14:45:48.228230    7796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:45:48.266858    7796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:45:48.305377    7796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1109 14:45:48.341389    7796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:45:48.372394    7796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:45:48.402381    7796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-643800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1109 14:45:48.436381    7796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:45:48.468380    7796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\10336.pem --> /usr/share/ca-certificates/10336.pem (1338 bytes)
	I1109 14:45:48.503388    7796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\103362.pem --> /usr/share/ca-certificates/103362.pem (1708 bytes)
	I1109 14:45:48.533389    7796 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:45:48.576378    7796 ssh_runner.go:195] Run: openssl version
	I1109 14:45:48.601510    7796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:45:48.629553    7796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:45:48.639608    7796 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:31 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:45:48.647773    7796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:45:48.716586    7796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:45:48.744588    7796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10336.pem && ln -fs /usr/share/ca-certificates/10336.pem /etc/ssl/certs/10336.pem"
	I1109 14:45:48.766588    7796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10336.pem
	I1109 14:45:48.773601    7796 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:39 /usr/share/ca-certificates/10336.pem
	I1109 14:45:48.784599    7796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10336.pem
	I1109 14:45:48.840586    7796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10336.pem /etc/ssl/certs/51391683.0"
	I1109 14:45:48.867073    7796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103362.pem && ln -fs /usr/share/ca-certificates/103362.pem /etc/ssl/certs/103362.pem"
	I1109 14:45:48.896762    7796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103362.pem
	I1109 14:45:48.904730    7796 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:39 /usr/share/ca-certificates/103362.pem
	I1109 14:45:48.911729    7796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103362.pem
	I1109 14:45:48.971326    7796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103362.pem /etc/ssl/certs/3ec20f2e.0"
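	The link names b5213941.0, 51391683.0 and 3ec20f2e.0 follow OpenSSL's subject-hash convention: each is the eight-hex-digit value printed by openssl x509 -hash plus a .0 suffix, which is how tools search the hashed /etc/ssl/certs directory. Rebuilding one of the links by hand, with paths from this run:

		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"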
	I1109 14:45:48.995334    7796 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:45:49.003327    7796 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 14:45:49.004332    7796 kubeadm.go:401] StartCluster: {Name:false-643800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:false-643800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:45:49.009340    7796 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1109 14:45:49.047329    7796 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:45:49.067328    7796 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:45:49.088973    7796 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 14:45:49.100328    7796 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:45:49.118504    7796 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:45:49.118504    7796 kubeadm.go:158] found existing configuration files:
	
	I1109 14:45:49.126659    7796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 14:45:49.139657    7796 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:45:49.146657    7796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:45:49.165665    7796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 14:45:49.181656    7796 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:45:49.189669    7796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:45:49.209660    7796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 14:45:49.227658    7796 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:45:49.239656    7796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:45:49.267658    7796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 14:45:49.286658    7796 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:45:49.296661    7796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 14:45:49.325656    7796 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 14:45:49.453674    7796 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1109 14:45:49.456662    7796 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1109 14:45:49.587507    7796 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 14:45:49.186658   11408 cli_runner.go:164] Run: docker container inspect enable-default-cni-643800 --format={{.State.Status}}
	I1109 14:45:49.249653   11408 machine.go:94] provisionDockerMachine start ...
	I1109 14:45:49.259654   11408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-643800
	I1109 14:45:49.338654   11408 main.go:143] libmachine: Using SSH client type: native
	I1109 14:45:49.356656   11408 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x11319e0] 0x1134500 <nil>  [] 0s} 127.0.0.1 54091 <nil> <nil>}
	I1109 14:45:49.356656   11408 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:45:49.520513   11408 main.go:143] libmachine: SSH cmd err, output: <nil>: enable-default-cni-643800
	
	I1109 14:45:49.520513   11408 ubuntu.go:182] provisioning hostname "enable-default-cni-643800"
	I1109 14:45:49.526513   11408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-643800
	I1109 14:45:49.585507   11408 main.go:143] libmachine: Using SSH client type: native
	I1109 14:45:49.585507   11408 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x11319e0] 0x1134500 <nil>  [] 0s} 127.0.0.1 54091 <nil> <nil>}
	I1109 14:45:49.585507   11408 main.go:143] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-643800 && echo "enable-default-cni-643800" | sudo tee /etc/hostname
	I1109 14:45:49.763877   11408 main.go:143] libmachine: SSH cmd err, output: <nil>: enable-default-cni-643800
	
	I1109 14:45:49.769895   11408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-643800
	I1109 14:45:49.830888   11408 main.go:143] libmachine: Using SSH client type: native
	I1109 14:45:49.831283   11408 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x11319e0] 0x1134500 <nil>  [] 0s} 127.0.0.1 54091 <nil> <nil>}
	I1109 14:45:49.831283   11408 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-643800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-643800/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-643800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:45:50.013965   11408 main.go:143] libmachine: SSH cmd err, output: <nil>: 
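	The script above only touches /etc/hosts when the hostname is not already resolvable, so the empty output here means either the entry was already present or the silent sed branch replaced an existing 127.0.1.1 line. A quick check that the mapping resolves inside the node (getent is standard on the glibc-based image):

		getent hosts enable-default-cni-643800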
	I1109 14:45:50.013965   11408 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1109 14:45:50.013965   11408 ubuntu.go:190] setting up certificates
	I1109 14:45:50.013965   11408 provision.go:84] configureAuth start
	I1109 14:45:50.023985   11408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-643800
	I1109 14:45:50.093551   11408 provision.go:143] copyHostCerts
	I1109 14:45:50.093869   11408 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1109 14:45:50.093869   11408 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1109 14:45:50.093869   11408 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1109 14:45:50.095026   11408 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1109 14:45:50.095026   11408 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1109 14:45:50.095026   11408 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1109 14:45:50.096458   11408 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1109 14:45:50.096458   11408 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1109 14:45:50.096789   11408 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1109 14:45:50.097246   11408 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.enable-default-cni-643800 san=[127.0.0.1 192.168.94.2 enable-default-cni-643800 localhost minikube]
	I1109 14:45:50.121799   11408 provision.go:177] copyRemoteCerts
	I1109 14:45:50.131824   11408 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:45:50.138799   11408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-643800
	I1109 14:45:50.200058   11408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54091 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\enable-default-cni-643800\id_rsa Username:docker}
	I1109 14:45:50.326427   11408 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:45:50.366807   11408 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I1109 14:45:50.401032   11408 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:45:50.439127   11408 provision.go:87] duration metric: took 425.1361ms to configureAuth
	I1109 14:45:50.439168   11408 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:45:50.439609   11408 config.go:182] Loaded profile config "enable-default-cni-643800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1109 14:45:50.448159   11408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-643800
	I1109 14:45:50.512223   11408 main.go:143] libmachine: Using SSH client type: native
	I1109 14:45:50.512223   11408 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x11319e0] 0x1134500 <nil>  [] 0s} 127.0.0.1 54091 <nil> <nil>}
	I1109 14:45:50.512223   11408 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1109 14:45:50.695472   11408 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1109 14:45:50.696036   11408 ubuntu.go:71] root file system type: overlay
	I1109 14:45:50.696160   11408 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1109 14:45:50.703673   11408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-643800
	I1109 14:45:50.778530   11408 main.go:143] libmachine: Using SSH client type: native
	I1109 14:45:50.779352   11408 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x11319e0] 0x1134500 <nil>  [] 0s} 127.0.0.1 54091 <nil> <nil>}
	I1109 14:45:50.779441   11408 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1109 14:45:50.984564   11408 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1109 14:45:50.989559   11408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-643800
	I1109 14:45:51.045390   11408 main.go:143] libmachine: Using SSH client type: native
	I1109 14:45:51.045963   11408 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x11319e0] 0x1134500 <nil>  [] 0s} 127.0.0.1 54091 <nil> <nil>}
	I1109 14:45:51.046083   11408 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1109 14:45:52.656307   11408 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-10-08 12:15:50.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-11-09 14:45:50.973057714 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
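The single SSH command above (diff ... || { mv; daemon-reload; enable; restart; }) is an idempotent unit update: diff exits non-zero only when docker.service.new differs from the installed unit, and only then is the new file moved into place and the service restarted. The same idiom, standalone:

    # Replace a systemd unit only if its content actually changed.
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
    }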
	
	I1109 14:45:52.656307   11408 machine.go:97] duration metric: took 3.4066121s to provisionDockerMachine
	I1109 14:45:52.656307   11408 client.go:176] duration metric: took 22.7590009s to LocalClient.Create
	I1109 14:45:52.656307   11408 start.go:167] duration metric: took 22.7590009s to libmachine.API.Create "enable-default-cni-643800"
	I1109 14:45:52.656307   11408 start.go:293] postStartSetup for "enable-default-cni-643800" (driver="docker")
	I1109 14:45:52.656307   11408 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:45:52.663807   11408 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:45:52.673849   11408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-643800
	I1109 14:45:52.735544   11408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54091 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\enable-default-cni-643800\id_rsa Username:docker}
	I1109 14:45:52.873409   11408 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:45:52.881042   11408 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:45:52.881042   11408 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:45:52.881042   11408 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1109 14:45:52.881042   11408 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1109 14:45:52.882009   11408 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\103362.pem -> 103362.pem in /etc/ssl/certs
	I1109 14:45:52.889865   11408 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:45:52.908143   11408 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\103362.pem --> /etc/ssl/certs/103362.pem (1708 bytes)
	I1109 14:45:52.947323   11408 start.go:296] duration metric: took 291.0125ms for postStartSetup
	I1109 14:45:52.956411   11408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-643800
	I1109 14:45:53.022766   11408 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\config.json ...
	I1109 14:45:53.034251   11408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:45:53.040173   11408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-643800
	I1109 14:45:53.097720   11408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54091 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\enable-default-cni-643800\id_rsa Username:docker}
	I1109 14:45:53.227194   11408 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:45:53.244525   11408 start.go:128] duration metric: took 23.3507798s to createHost
	I1109 14:45:53.244525   11408 start.go:83] releasing machines lock for "enable-default-cni-643800", held for 23.3511093s
	I1109 14:45:53.251947   11408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-643800
	I1109 14:45:53.304937   11408 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1109 14:45:53.311936   11408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-643800
	I1109 14:45:53.311936   11408 ssh_runner.go:195] Run: cat /version.json
	I1109 14:45:53.320782   11408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-643800
	I1109 14:45:53.381560   11408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54091 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\enable-default-cni-643800\id_rsa Username:docker}
	I1109 14:45:53.382569   11408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54091 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\enable-default-cni-643800\id_rsa Username:docker}
	W1109 14:45:53.493533   11408 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1109 14:45:53.500659   11408 ssh_runner.go:195] Run: systemctl --version
	I1109 14:45:53.520827   11408 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:45:53.529338   11408 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:45:53.537834   11408 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W1109 14:45:53.598798   11408 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1109 14:45:53.598885   11408 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
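The two warnings above appear to trace directly to the probe at 14:45:53.304937: minikube invoked curl.exe (the Windows binary name) inside the Linux container, where only curl exists, so the probe failed with exit status 127 ("command not found") rather than with a real network error. These are also the two stderr lines that TestErrorSpam/setup reports as unexpected. The same probe with the binary name that exists inside the container would be:

    # Reachability probe using the Linux binary name instead of curl.exe.
    curl -sS -m 2 https://registry.k8s.io/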
	I1109 14:45:53.605568   11408 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1109 14:45:53.605635   11408 start.go:496] detecting cgroup driver to use...
	I1109 14:45:53.605679   11408 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:45:53.605824   11408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:45:53.644083   11408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1109 14:45:53.670466   11408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1109 14:45:53.686422   11408 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1109 14:45:53.695980   11408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1109 14:45:53.726872   11408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1109 14:45:53.751866   11408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1109 14:45:53.780495   11408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1109 14:45:53.808360   11408 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:45:53.841350   11408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1109 14:45:53.864364   11408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1109 14:45:53.892413   11408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1109 14:45:53.927095   11408 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:45:53.957390   11408 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:45:53.979325   11408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:45:54.151708   11408 ssh_runner.go:195] Run: sudo systemctl restart containerd
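The run of sed commands above rewrites /etc/containerd/config.toml in place: pin the sandbox (pause) image, force SystemdCgroup = false to match the detected cgroupfs driver, migrate the io.containerd.runtime.v1.linux and io.containerd.runc.v1 names to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and re-enable unprivileged ports. A condensed sketch using the same file and keys as the log:

    # Condensed form of the containerd config rewrite performed above.
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd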
	I1109 14:45:54.329018   11408 start.go:496] detecting cgroup driver to use...
	I1109 14:45:54.329128   11408 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:45:54.337793   11408 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1109 14:45:54.375853   11408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:45:54.416738   11408 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:45:54.446747   11408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:45:54.477693   11408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1109 14:45:54.499559   11408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:45:54.537074   11408 ssh_runner.go:195] Run: which cri-dockerd
	I1109 14:45:54.556435   11408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1109 14:45:54.571448   11408 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1109 14:45:54.598441   11408 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1109 14:45:54.777576   11408 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1109 14:45:54.889901   11408 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1109 14:45:54.889901   11408 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1109 14:45:54.919043   11408 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1109 14:45:54.946083   11408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:45:55.124509   11408 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1109 14:45:56.053557   11408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:45:56.096296   11408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1109 14:45:56.181110   11408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1109 14:45:56.213017   11408 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1109 14:45:56.377255   11408 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1109 14:45:56.538269   11408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:45:56.732541   11408 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1109 14:45:56.764546   11408 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1109 14:45:56.793705   11408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:45:56.946863   11408 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1109 14:45:57.086253   11408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1109 14:45:57.117379   11408 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1109 14:45:57.124563   11408 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1109 14:45:57.132521   11408 start.go:564] Will wait 60s for crictl version
	I1109 14:45:57.139522   11408 ssh_runner.go:195] Run: which crictl
	I1109 14:45:57.154530   11408 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:45:57.205531   11408 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.1
	RuntimeApiVersion:  v1
	I1109 14:45:57.211526   11408 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 14:45:57.281637   11408 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 14:45:57.324634   11408 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.5.1 ...
	I1109 14:45:57.330631   11408 cli_runner.go:164] Run: docker exec -t enable-default-cni-643800 dig +short host.docker.internal
	I1109 14:45:57.478234   11408 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1109 14:45:57.487250   11408 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1109 14:45:57.495235   11408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
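The /bin/bash one-liner above is minikube's idempotent /etc/hosts update: filter out any existing host.minikube.internal line, append the fresh mapping, write to a temp file, then sudo cp it back over /etc/hosts (a plain shell redirect could not write the root-owned file). Generalized, with the values from this run:

    # Idempotent hosts entry: drop any stale line for NAME, then append IP<TAB>NAME.
    IP=192.168.65.254; NAME=host.minikube.internal
    { grep -v $'\t'"${NAME}\$" /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts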
	I1109 14:45:57.516235   11408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" enable-default-cni-643800
	I1109 14:45:57.578880   11408 kubeadm.go:884] updating cluster {Name:enable-default-cni-643800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:enable-default-cni-643800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:45:57.578880   11408 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1109 14:45:57.586079   11408 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 14:45:57.626549   11408 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1109 14:45:57.626549   11408 docker.go:621] Images already preloaded, skipping extraction
	I1109 14:45:57.632544   11408 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 14:45:57.667162   11408 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1109 14:45:57.667162   11408 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:45:57.667162   11408 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 docker true true} ...
	I1109 14:45:57.667162   11408 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-643800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:enable-default-cni-643800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
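As with docker.service earlier in this log, the empty ExecStart= line in the kubelet drop-in above is the systemd idiom for clearing the ExecStart inherited from the base unit before supplying a replacement (systemd otherwise rejects a second ExecStart for non-oneshot services). To see the merged result on the node:

    # Show the effective unit, base file plus drop-ins (same check the log runs for docker.service).
    sudo systemctl cat kubelet.service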
	I1109 14:45:57.678189   11408 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1109 14:45:57.767132   11408 cni.go:84] Creating CNI manager for "bridge"
	I1109 14:45:57.767132   11408 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:45:57.767132   11408 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-643800 NodeName:enable-default-cni-643800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:45:57.767811   11408 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "enable-default-cni-643800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
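One way to sanity-check the generated config above before it is applied (not something this test run does) is kubeadm's dry-run mode, which renders manifests without changing the node:

    # Hypothetical validation pass over the generated config; path taken from this log.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run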
	
	I1109 14:45:57.775443   11408 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:45:57.794747   11408 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:45:57.806679   11408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:45:57.821675   11408 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1109 14:45:57.842679   11408 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:45:57.864681   11408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1109 14:45:57.894671   11408 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:45:57.902671   11408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:45:57.930657   11408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:45:58.079896   11408 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:45:58.110619   11408 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800 for IP: 192.168.94.2
	I1109 14:45:58.110619   11408 certs.go:195] generating shared ca certs ...
	I1109 14:45:58.111633   11408 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:45:58.111633   11408 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1109 14:45:58.112388   11408 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1109 14:45:58.112515   11408 certs.go:257] generating profile certs ...
	I1109 14:45:58.113024   11408 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\client.key
	I1109 14:45:58.113024   11408 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\client.crt with IP's: []
	I1109 14:45:58.407347   11408 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\client.crt ...
	I1109 14:45:58.407347   11408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\client.crt: {Name:mk646ecc4870dc3e3c1170cbcfc1d9f46573374a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:45:58.408348   11408 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\client.key ...
	I1109 14:45:58.408348   11408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\client.key: {Name:mk162bb690ebec5035ef9f1ccda0da52e46bbe68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:45:58.409347   11408 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\apiserver.key.66482264
	I1109 14:45:58.409347   11408 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\apiserver.crt.66482264 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1109 14:45:58.913565   11408 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\apiserver.crt.66482264 ...
	I1109 14:45:58.913633   11408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\apiserver.crt.66482264: {Name:mk967fa8ffc680c89080db30ee1a48890c9cd905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:45:58.914308   11408 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\apiserver.key.66482264 ...
	I1109 14:45:58.914308   11408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\apiserver.key.66482264: {Name:mk29696c576cd7e10a36da9e76b31558543fc5a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:45:58.914967   11408 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\apiserver.crt.66482264 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\apiserver.crt
	I1109 14:45:58.930031   11408 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\apiserver.key.66482264 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\apiserver.key
	I1109 14:45:58.931372   11408 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\proxy-client.key
	I1109 14:45:58.931565   11408 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\proxy-client.crt with IP's: []
	I1109 14:46:03.134407    1604 kubeadm.go:319] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1109 14:46:03.134407    1604 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1109 14:46:03.138880    1604 kubeadm.go:319] [init] Using Kubernetes version: v1.28.3
	I1109 14:46:03.138939    1604 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 14:46:03.139260    1604 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 14:46:03.139535    1604 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 14:46:03.139857    1604 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1109 14:46:03.140080    1604 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 14:46:03.142713    1604 out.go:252]   - Generating certificates and keys ...
	I1109 14:46:03.143322    1604 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 14:46:03.143568    1604 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 14:46:03.143755    1604 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1109 14:46:03.143942    1604 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1109 14:46:03.144190    1604 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1109 14:46:03.144415    1604 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1109 14:46:03.144415    1604 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1109 14:46:03.144415    1604 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1109 14:46:03.144415    1604 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1109 14:46:03.144415    1604 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1109 14:46:03.144969    1604 kubeadm.go:319] [certs] Using the existing "sa" key
	I1109 14:46:03.145024    1604 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 14:46:03.145024    1604 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 14:46:03.145024    1604 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 14:46:03.145024    1604 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 14:46:03.145024    1604 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 14:46:03.145559    1604 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 14:46:03.145672    1604 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 14:45:59.586046   11408 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\proxy-client.crt ...
	I1109 14:45:59.586046   11408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\proxy-client.crt: {Name:mkb6a6f1e3e2bf14ccef5f286fe334aedb902e74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:45:59.587286   11408 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\proxy-client.key ...
	I1109 14:45:59.587286   11408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\proxy-client.key: {Name:mkd87deef998e0a63e96ef61479803f7d084e3ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:45:59.604879   11408 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\10336.pem (1338 bytes)
	W1109 14:45:59.604879   11408 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\10336_empty.pem, impossibly tiny 0 bytes
	I1109 14:45:59.604879   11408 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1109 14:45:59.605419   11408 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1109 14:45:59.605626   11408 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1109 14:45:59.605836   11408 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1109 14:45:59.606403   11408 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\103362.pem (1708 bytes)
	I1109 14:45:59.608181   11408 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:45:59.646117   11408 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 14:45:59.689882   11408 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:45:59.718873   11408 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:45:59.757818   11408 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1109 14:45:59.796219   11408 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:45:59.823223   11408 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:45:59.850227   11408 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-643800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:45:59.881225   11408 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\10336.pem --> /usr/share/ca-certificates/10336.pem (1338 bytes)
	I1109 14:45:59.912225   11408 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\103362.pem --> /usr/share/ca-certificates/103362.pem (1708 bytes)
	I1109 14:45:59.939230   11408 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:45:59.972718   11408 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:46:00.005167   11408 ssh_runner.go:195] Run: openssl version
	I1109 14:46:00.024869   11408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10336.pem && ln -fs /usr/share/ca-certificates/10336.pem /etc/ssl/certs/10336.pem"
	I1109 14:46:00.050396   11408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10336.pem
	I1109 14:46:00.061476   11408 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:39 /usr/share/ca-certificates/10336.pem
	I1109 14:46:00.074389   11408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10336.pem
	I1109 14:46:00.133625   11408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10336.pem /etc/ssl/certs/51391683.0"
	I1109 14:46:00.154627   11408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103362.pem && ln -fs /usr/share/ca-certificates/103362.pem /etc/ssl/certs/103362.pem"
	I1109 14:46:00.176636   11408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103362.pem
	I1109 14:46:00.184633   11408 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:39 /usr/share/ca-certificates/103362.pem
	I1109 14:46:00.191627   11408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103362.pem
	I1109 14:46:00.266853   11408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103362.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:46:00.288851   11408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:46:00.322297   11408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:46:00.334612   11408 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:31 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:46:00.342505   11408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:46:00.411787   11408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
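The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory (c_rehash) convention: a CA in /etc/ssl/certs is located through a symlink named <subject-hash>.0, which is why minikubeCA.pem ends up behind b5213941.0. For one certificate:

    # Install a CA into the OpenSSL hash-link layout used above.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
    # For this run HASH is b5213941, matching the test -L check in the log.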
	I1109 14:46:00.433805   11408 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:46:00.447513   11408 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 14:46:00.448305   11408 kubeadm.go:401] StartCluster: {Name:enable-default-cni-643800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:enable-default-cni-643800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:46:00.457134   11408 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1109 14:46:00.509273   11408 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:46:00.531285   11408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:46:00.554287   11408 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 14:46:00.566291   11408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:46:00.580279   11408 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:46:00.580279   11408 kubeadm.go:158] found existing configuration files:
	
	I1109 14:46:00.587272   11408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 14:46:00.600273   11408 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:46:00.607277   11408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:46:00.627273   11408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 14:46:00.640279   11408 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:46:00.646282   11408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:46:00.673654   11408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 14:46:00.690160   11408 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:46:00.697448   11408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:46:00.718551   11408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 14:46:00.730555   11408 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:46:00.737550   11408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 14:46:00.761383   11408 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 14:46:00.901595   11408 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1109 14:46:00.907745   11408 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1109 14:46:01.038156   11408 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 14:46:03.148963    1604 out.go:252]   - Booting up control plane ...
	I1109 14:46:03.148963    1604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 14:46:03.148963    1604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 14:46:03.148963    1604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 14:46:03.149923    1604 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 14:46:03.149923    1604 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 14:46:03.149923    1604 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 14:46:03.149923    1604 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1109 14:46:03.149923    1604 kubeadm.go:319] [kubelet-check] Initial timeout of 40s passed.
	I1109 14:46:03.149923    1604 kubeadm.go:319] 
	I1109 14:46:03.150930    1604 kubeadm.go:319] Unfortunately, an error has occurred:
	I1109 14:46:03.150930    1604 kubeadm.go:319] 	timed out waiting for the condition
	I1109 14:46:03.150930    1604 kubeadm.go:319] 
	I1109 14:46:03.150930    1604 kubeadm.go:319] This error is likely caused by:
	I1109 14:46:03.150930    1604 kubeadm.go:319] 	- The kubelet is not running
	I1109 14:46:03.150930    1604 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1109 14:46:03.150930    1604 kubeadm.go:319] 
	I1109 14:46:03.150930    1604 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1109 14:46:03.150930    1604 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1109 14:46:03.150930    1604 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1109 14:46:03.150930    1604 kubeadm.go:319] 
	I1109 14:46:03.151913    1604 kubeadm.go:319] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1109 14:46:03.151913    1604 kubeadm.go:319] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1109 14:46:03.151913    1604 kubeadm.go:319] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1109 14:46:03.151913    1604 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	I1109 14:46:03.151913    1604 kubeadm.go:319] 	Once you have found the failing container, you can inspect its logs with:
	I1109 14:46:03.151913    1604 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
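kubeadm's hint above is directly runnable against the cri-dockerd endpoint this cluster uses; the log gathering that follows (crictl ps -a --quiet --name=...) is minikube automating the same steps. By hand:

    # List non-pause Kubernetes containers, then inspect a failing one (per kubeadm's hint).
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID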
	I1109 14:46:03.151913    1604 kubeadm.go:403] duration metric: took 14m36.2712228s to StartCluster
	I1109 14:46:03.152926    1604 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1109 14:46:03.160077    1604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1109 14:46:03.256048    1604 cri.go:89] found id: "f159ce501abc24427240ea30e47c614499580593ebea41b7479437ced5f19334"
	I1109 14:46:03.256048    1604 cri.go:89] found id: ""
	I1109 14:46:03.256048    1604 logs.go:282] 1 containers: [f159ce501abc24427240ea30e47c614499580593ebea41b7479437ced5f19334]
	I1109 14:46:03.266177    1604 ssh_runner.go:195] Run: which crictl
	I1109 14:46:03.274749    1604 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1109 14:46:03.281747    1604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1109 14:46:03.373310    1604 cri.go:89] found id: "6c1d6990ec9a17213a491b9b31dc6b7a0c15a7cd579cd9b12f9b4eb7b925d0fc"
	I1109 14:46:03.373310    1604 cri.go:89] found id: ""
	I1109 14:46:03.373310    1604 logs.go:282] 1 containers: [6c1d6990ec9a17213a491b9b31dc6b7a0c15a7cd579cd9b12f9b4eb7b925d0fc]
	I1109 14:46:03.381575    1604 ssh_runner.go:195] Run: which crictl
	I1109 14:46:03.388574    1604 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1109 14:46:03.397569    1604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1109 14:46:03.483585    1604 cri.go:89] found id: ""
	I1109 14:46:03.483585    1604 logs.go:282] 0 containers: []
	W1109 14:46:03.483585    1604 logs.go:284] No container was found matching "coredns"
	I1109 14:46:03.483585    1604 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1109 14:46:03.492604    1604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1109 14:46:03.582247    1604 cri.go:89] found id: "9916a784d53cc5f2ca77c62d475395dc21acbe3a4cc052be5908741fae99bfee"
	I1109 14:46:03.582247    1604 cri.go:89] found id: ""
	I1109 14:46:03.582247    1604 logs.go:282] 1 containers: [9916a784d53cc5f2ca77c62d475395dc21acbe3a4cc052be5908741fae99bfee]
	I1109 14:46:03.590240    1604 ssh_runner.go:195] Run: which crictl
	I1109 14:46:03.599246    1604 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1109 14:46:03.607237    1604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1109 14:46:03.687234    1604 cri.go:89] found id: ""
	I1109 14:46:03.687234    1604 logs.go:282] 0 containers: []
	W1109 14:46:03.687234    1604 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:46:03.687234    1604 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1109 14:46:03.694235    1604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1109 14:46:03.796939    1604 cri.go:89] found id: "bd1e895dfe296b588d9737b1a5eb36949c0c7eb030142f53536645f3348ee505"
	I1109 14:46:03.796965    1604 cri.go:89] found id: "df0342a399cd6c5cdae1627914dd8cfa5c439d038612bfd86863d8be51c874e9"
	I1109 14:46:03.796965    1604 cri.go:89] found id: ""
	I1109 14:46:03.797011    1604 logs.go:282] 2 containers: [bd1e895dfe296b588d9737b1a5eb36949c0c7eb030142f53536645f3348ee505 df0342a399cd6c5cdae1627914dd8cfa5c439d038612bfd86863d8be51c874e9]
	I1109 14:46:03.805759    1604 ssh_runner.go:195] Run: which crictl
	I1109 14:46:03.820720    1604 ssh_runner.go:195] Run: which crictl
	I1109 14:46:03.826719    1604 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1109 14:46:03.833719    1604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1109 14:46:03.924217    1604 cri.go:89] found id: ""
	I1109 14:46:03.924217    1604 logs.go:282] 0 containers: []
	W1109 14:46:03.924217    1604 logs.go:284] No container was found matching "kindnet"
	I1109 14:46:03.925221    1604 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1109 14:46:03.932233    1604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1109 14:46:04.024603    1604 cri.go:89] found id: ""
	I1109 14:46:04.024603    1604 logs.go:282] 0 containers: []
	W1109 14:46:04.024603    1604 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:46:04.024603    1604 logs.go:123] Gathering logs for kubelet ...
	I1109 14:46:04.024603    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:46:04.128605    1604 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:46:04.128605    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:46:04.240535    1604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:46:04.232003   22141 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:46:04.233049   22141 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:46:04.234185   22141 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:46:04.234963   22141 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:46:04.237327   22141 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1109 14:46:04.232003   22141 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:46:04.233049   22141 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:46:04.234185   22141 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:46:04.234963   22141 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:46:04.237327   22141 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 14:46:04.240535    1604 logs.go:123] Gathering logs for kube-apiserver [f159ce501abc24427240ea30e47c614499580593ebea41b7479437ced5f19334] ...
	I1109 14:46:04.240535    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f159ce501abc24427240ea30e47c614499580593ebea41b7479437ced5f19334"
	I1109 14:46:04.329115    1604 logs.go:123] Gathering logs for kube-controller-manager [bd1e895dfe296b588d9737b1a5eb36949c0c7eb030142f53536645f3348ee505] ...
	I1109 14:46:04.329115    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd1e895dfe296b588d9737b1a5eb36949c0c7eb030142f53536645f3348ee505"
	I1109 14:46:04.403632    1604 logs.go:123] Gathering logs for kube-controller-manager [df0342a399cd6c5cdae1627914dd8cfa5c439d038612bfd86863d8be51c874e9] ...
	I1109 14:46:04.403632    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df0342a399cd6c5cdae1627914dd8cfa5c439d038612bfd86863d8be51c874e9"
	I1109 14:46:04.474152    1604 logs.go:123] Gathering logs for Docker ...
	I1109 14:46:04.474152    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1109 14:46:04.554610    1604 logs.go:123] Gathering logs for dmesg ...
	I1109 14:46:04.554610    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:46:04.583252    1604 logs.go:123] Gathering logs for etcd [6c1d6990ec9a17213a491b9b31dc6b7a0c15a7cd579cd9b12f9b4eb7b925d0fc] ...
	I1109 14:46:04.583363    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c1d6990ec9a17213a491b9b31dc6b7a0c15a7cd579cd9b12f9b4eb7b925d0fc"
	I1109 14:46:04.677842    1604 logs.go:123] Gathering logs for kube-scheduler [9916a784d53cc5f2ca77c62d475395dc21acbe3a4cc052be5908741fae99bfee] ...
	I1109 14:46:04.677981    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9916a784d53cc5f2ca77c62d475395dc21acbe3a4cc052be5908741fae99bfee"
	I1109 14:46:04.803422    1604 logs.go:123] Gathering logs for container status ...
	I1109 14:46:04.803422    1604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
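Each "Gathering logs for ..." step above is a single SSH command, so the same evidence can be pulled by hand from inside the node. A sketch, assuming `minikube ssh -p missing-upgrade-184300` reaches the node and the container IDs found by the cri.go lines above are still present:

    # service journals, exactly as minikube collects them
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    # one component's container log, e.g. the kube-apiserver container found above
    sudo /usr/bin/crictl logs --tail 400 f159ce501abc24427240ea30e47c614499580593ebea41b7479437ced5f19334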
	W1109 14:46:04.912632    1604 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.28.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
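Since the output above blames the kubelet first, the two systemd commands kubeadm suggests are the natural starting point, and the stderr warning about the disabled unit suggests a third. A sketch, assuming a systemd-based node (which the minikube base image is):

    systemctl status kubelet                # active, or crash-looping?
    journalctl -xeu kubelet | tail -n 50    # most recent failure context
    sudo systemctl enable kubelet.service   # addresses the [WARNING Service-Kubelet] above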
	W1109 14:46:04.912632    1604 out.go:285] * 
	W1109 14:46:04.912632    1604 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.28.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1109 14:46:04.913628    1604 out.go:285] * 
	W1109 14:46:04.915628    1604 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
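For reference, the command the box asks for, scoped to this profile (profile name taken from this run; --file and -p are standard minikube flags):

    minikube logs --file=logs.txt -p missing-upgrade-184300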
	I1109 14:46:04.918630    1604 out.go:203] 
	W1109 14:46:04.922623    1604 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.28.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1109 14:46:04.922623    1604 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1109 14:46:04.922623    1604 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
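The suggestion maps to restarting this profile with one extra kubelet flag; a sketch of the suggested invocation, reusing the profile name from this run (the docker driver is an assumption inferred from the Docker journal in this report):

    minikube start -p missing-upgrade-184300 --driver=docker \
      --extra-config=kubelet.cgroup-driver=systemd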
	I1109 14:46:04.925626    1604 out.go:203] 
	I1109 14:46:05.277644    7796 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1109 14:46:05.277644    7796 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 14:46:05.278655    7796 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 14:46:05.278655    7796 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 14:46:05.278655    7796 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 14:46:05.278655    7796 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 14:46:05.282643    7796 out.go:252]   - Generating certificates and keys ...
	I1109 14:46:05.282643    7796 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 14:46:05.282643    7796 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 14:46:05.283661    7796 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 14:46:05.283661    7796 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 14:46:05.283661    7796 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 14:46:05.283661    7796 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 14:46:05.283661    7796 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 14:46:05.283661    7796 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [false-643800 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1109 14:46:05.283661    7796 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 14:46:05.284652    7796 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [false-643800 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1109 14:46:05.284652    7796 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 14:46:05.284652    7796 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 14:46:05.284652    7796 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 14:46:05.284652    7796 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 14:46:05.285655    7796 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 14:46:05.285655    7796 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 14:46:05.285655    7796 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 14:46:05.285655    7796 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 14:46:05.285655    7796 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 14:46:05.285655    7796 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 14:46:05.286669    7796 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 14:46:05.288645    7796 out.go:252]   - Booting up control plane ...
	I1109 14:46:05.288645    7796 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 14:46:05.289644    7796 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 14:46:05.289644    7796 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 14:46:05.289644    7796 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 14:46:05.289644    7796 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1109 14:46:05.290653    7796 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1109 14:46:05.290653    7796 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 14:46:05.290653    7796 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 14:46:05.290653    7796 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1109 14:46:05.291661    7796 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1109 14:46:05.291661    7796 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.533654ms
	I1109 14:46:05.291661    7796 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1109 14:46:05.291661    7796 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1109 14:46:05.291661    7796 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1109 14:46:05.291661    7796 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1109 14:46:05.292653    7796 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.989090123s
	I1109 14:46:05.292653    7796 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.871204189s
	I1109 14:46:05.292653    7796 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.503121032s
	I1109 14:46:05.292653    7796 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 14:46:05.292653    7796 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 14:46:05.293660    7796 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 14:46:05.293660    7796 kubeadm.go:319] [mark-control-plane] Marking the node false-643800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 14:46:05.293660    7796 kubeadm.go:319] [bootstrap-token] Using token: xp31p4.camip6q89795tf2s
	I1109 14:46:05.296645    7796 out.go:252]   - Configuring RBAC rules ...
	I1109 14:46:05.296645    7796 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 14:46:05.296645    7796 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 14:46:05.296645    7796 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 14:46:05.297646    7796 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 14:46:05.297646    7796 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 14:46:05.297646    7796 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 14:46:05.297646    7796 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 14:46:05.298649    7796 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 14:46:05.298649    7796 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 14:46:05.298649    7796 kubeadm.go:319] 
	I1109 14:46:05.298649    7796 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 14:46:05.298649    7796 kubeadm.go:319] 
	I1109 14:46:05.298649    7796 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 14:46:05.298649    7796 kubeadm.go:319] 
	I1109 14:46:05.298649    7796 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 14:46:05.298649    7796 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 14:46:05.298649    7796 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 14:46:05.298649    7796 kubeadm.go:319] 
	I1109 14:46:05.298649    7796 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 14:46:05.299655    7796 kubeadm.go:319] 
	I1109 14:46:05.299655    7796 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 14:46:05.299655    7796 kubeadm.go:319] 
	I1109 14:46:05.299655    7796 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 14:46:05.299655    7796 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 14:46:05.299655    7796 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
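kubeadm deliberately leaves [podnetwork].yaml unresolved; the apply step would look like the sketch below, where cni-manifest.yaml is a hypothetical placeholder for whichever CNI's manifest is chosen. For this particular profile the CNI manager is "false" (see the cni.go line below), so the step is only illustrative:

    kubectl apply -f cni-manifest.yaml   # hypothetical placeholder for the chosen CNI's manifest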
	I1109 14:46:05.299655    7796 kubeadm.go:319] 
	I1109 14:46:05.299655    7796 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 14:46:05.299655    7796 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 14:46:05.300649    7796 kubeadm.go:319] 
	I1109 14:46:05.300649    7796 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token xp31p4.camip6q89795tf2s \
	I1109 14:46:05.300649    7796 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b30bf296f8ba4330d46a1aea8c13c780c9c73ecf70f88e144a6185b969bbb8f0 \
	I1109 14:46:05.300649    7796 kubeadm.go:319] 	--control-plane 
	I1109 14:46:05.300649    7796 kubeadm.go:319] 
	I1109 14:46:05.300649    7796 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 14:46:05.300649    7796 kubeadm.go:319] 
	I1109 14:46:05.300649    7796 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xp31p4.camip6q89795tf2s \
	I1109 14:46:05.301651    7796 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b30bf296f8ba4330d46a1aea8c13c780c9c73ecf70f88e144a6185b969bbb8f0 
	I1109 14:46:05.301651    7796 cni.go:84] Creating CNI manager for "false"
	I1109 14:46:05.301651    7796 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 14:46:05.313647    7796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:46:05.313647    7796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes false-643800 minikube.k8s.io/updated_at=2025_11_09T14_46_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=false-643800 minikube.k8s.io/primary=true
	I1109 14:46:05.372649    7796 ops.go:34] apiserver oom_adj: -16
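The oom_adj probe above reads the apiserver's OOM score adjustment straight from procfs; it can be reproduced in the node with the same one-liner, assuming pgrep matches exactly one kube-apiserver process:

    # negative values make the kernel OOM killer less likely to pick this process
    sudo cat /proc/$(pgrep kube-apiserver)/oom_adj   # prints -16 here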
	I1109 14:46:05.603658    7796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:46:06.104202    7796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:46:06.601099    7796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	
	==> Docker <==
	Nov 09 14:41:38 missing-upgrade-184300 cri-dockerd[1353]: time="2025-11-09T14:41:38Z" level=error msg="Failed to delete corrupt checkpoint for sandbox URL=\"unix:///var/run/cri-dockerd.sock\": invalid key: \"URL=\\\"unix:///var/run/cri-dockerd.sock\\\"\""
	Nov 09 14:41:38 missing-upgrade-184300 dockerd[1140]: time="2025-11-09T14:41:38.771667345Z" level=info msg="ignoring event" container=f921044bc4010d5fe16edc35ef66be141dc2293f65c608e6514e5acd3f0b8527 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 14:41:38 missing-upgrade-184300 dockerd[1140]: time="2025-11-09T14:41:38.949469531Z" level=info msg="ignoring event" container=6b2fd824fc88a31873f7cd3e89ce48c037090f6de1ff0345c3cc16dafae87e79 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 14:41:39 missing-upgrade-184300 dockerd[1140]: time="2025-11-09T14:41:39.163411167Z" level=info msg="ignoring event" container=1e78b95e06ed73fbf0d0d707eae246bbde954b4345003f339d3f8b7c1750aa10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 14:41:39 missing-upgrade-184300 dockerd[1140]: time="2025-11-09T14:41:39.459691305Z" level=info msg="ignoring event" container=0476bfacf6cdc5d2baf993c73e32c9a3e7db783f8517fd032df776132bbf84af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 14:41:39 missing-upgrade-184300 dockerd[1140]: time="2025-11-09T14:41:39.717092748Z" level=info msg="ignoring event" container=d5ca9e6c80af12d82c437a93c4bd31f11c3469c8950210f69ae83bed858bf14f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 14:41:39 missing-upgrade-184300 dockerd[1140]: time="2025-11-09T14:41:39.915965128Z" level=info msg="ignoring event" container=89e8f2b5d716a421cd62e65b95aa282b8f84dfb8844cfe4207df478605a0da09 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 14:41:40 missing-upgrade-184300 dockerd[1140]: time="2025-11-09T14:41:40.152486094Z" level=info msg="ignoring event" container=d79455747d8a19dc6da0626ee30f2dcf932d63a0acdb08b2fe965b913cf96776 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 14:41:44 missing-upgrade-184300 cri-dockerd[1353]: time="2025-11-09T14:41:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d8d25543ddd9d0aa9e987975d90f6de9cd30980c9056e97e4b056f9dec43013c/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Nov 09 14:41:44 missing-upgrade-184300 cri-dockerd[1353]: time="2025-11-09T14:41:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a1867156088b23b7bbe0f82c7b80c04eac8afae67857bc48201177c805072946/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Nov 09 14:41:44 missing-upgrade-184300 cri-dockerd[1353]: time="2025-11-09T14:41:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a96bdc6b42e9bbf3b66b865218a9c1a1d2dae542344613604b0de9fa3f92ffb3/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Nov 09 14:41:44 missing-upgrade-184300 cri-dockerd[1353]: time="2025-11-09T14:41:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e4a11dc8e78ec2a2bdb41920e55b3605740b2148a40c7fb67854adf06be1cc0b/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Nov 09 14:41:45 missing-upgrade-184300 dockerd[1140]: time="2025-11-09T14:41:45.107943896Z" level=info msg="ignoring event" container=1dc6fce7f9f507638a3b690b8c58a7600570a6a4814846de1548ce095788aa60 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 14:41:46 missing-upgrade-184300 dockerd[1140]: time="2025-11-09T14:41:46.494158291Z" level=info msg="ignoring event" container=2dfc8dd95bca3885829c0c09ae55deddd9c7f21f4bbd241f0709c0438fa56b56 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 14:42:06 missing-upgrade-184300 dockerd[1140]: time="2025-11-09T14:42:06.414534232Z" level=info msg="ignoring event" container=1ebe6fb9d6d3f7fbd5c4b1b55ea82cc08ae90fa9efa2e720b80f0f8972c032c4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 14:42:09 missing-upgrade-184300 dockerd[1140]: time="2025-11-09T14:42:09.591632197Z" level=info msg="ignoring event" container=d92aecfb5f8005415daff876285531b10a4fa74249a7da4fc4f9837220269fb0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 14:42:27 missing-upgrade-184300 dockerd[1140]: time="2025-11-09T14:42:27.969440856Z" level=info msg="ignoring event" container=2976784a85cfb383a2f8aa69479fb911034bba68efce5c1e3b5d3da75a8a9f50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 14:42:30 missing-upgrade-184300 dockerd[1140]: time="2025-11-09T14:42:30.411044334Z" level=info msg="ignoring event" container=dc2d7a7e48210f29796d42d8431933c48173027f7803e74ecea64b67c835482c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 14:43:09 missing-upgrade-184300 dockerd[1140]: time="2025-11-09T14:43:09.294705767Z" level=info msg="ignoring event" container=7b2d56b7c84d0320467d94ba4b42ebb12bae060f3b7812d3439b32235d06395b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 14:43:16 missing-upgrade-184300 dockerd[1140]: time="2025-11-09T14:43:16.916196993Z" level=info msg="ignoring event" container=5f98a31873550dd1da7adb63661e37a78c16b8cbe7ef8728979c0a02b282d7e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 14:43:21 missing-upgrade-184300 dockerd[1140]: time="2025-11-09T14:43:21.226086907Z" level=info msg="ignoring event" container=48a7544676173e4947f9eedd92fb989a060985bfb1c5698d9f6a8cfb45261ab1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 14:44:01 missing-upgrade-184300 dockerd[1140]: time="2025-11-09T14:44:01.295171552Z" level=info msg="ignoring event" container=511037d1617cb967088aa3cf086fb99d90ae8ef3a41dc8ebee98cbdba48ce3e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 14:44:42 missing-upgrade-184300 dockerd[1140]: time="2025-11-09T14:44:42.667589938Z" level=info msg="ignoring event" container=df0342a399cd6c5cdae1627914dd8cfa5c439d038612bfd86863d8be51c874e9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 14:44:49 missing-upgrade-184300 dockerd[1140]: time="2025-11-09T14:44:49.461678649Z" level=info msg="ignoring event" container=6c1d6990ec9a17213a491b9b31dc6b7a0c15a7cd579cd9b12f9b4eb7b925d0fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 14:45:03 missing-upgrade-184300 dockerd[1140]: time="2025-11-09T14:45:03.833101996Z" level=info msg="ignoring event" container=f159ce501abc24427240ea30e47c614499580593ebea41b7479437ced5f19334 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	bd1e895dfe296       10baa1ca17068       About a minute ago   Running             kube-controller-manager   2                   a96bdc6b42e9b       kube-controller-manager-missing-upgrade-184300
	6c1d6990ec9a1       73deb9a3f7025       About a minute ago   Exited              etcd                      5                   a1867156088b2       etcd-missing-upgrade-184300
	f159ce501abc2       5374347291230       About a minute ago   Exited              kube-apiserver            4                   e4a11dc8e78ec       kube-apiserver-missing-upgrade-184300
	df0342a399cd6       10baa1ca17068       2 minutes ago        Exited              kube-controller-manager   1                   a96bdc6b42e9b       kube-controller-manager-missing-upgrade-184300
	9916a784d53cc       6d1b4fd1b182d       4 minutes ago        Running             kube-scheduler            0                   d8d25543ddd9d       kube-scheduler-missing-upgrade-184300
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:46:09.686811   22414 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:46:09.687905   22414 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:46:09.689296   22414 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:46:09.690336   22414 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	E1109 14:46:09.690960   22414 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
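Every call above dies at the TCP layer, so the quickest check is whether anything is listening on 8443 inside the node. A sketch, assuming ss from iproute2 is available in the image:

    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
    sudo crictl ps -a --name kube-apiserver   # cross-check: is the apiserver container even running?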
	
	
	==> dmesg <==
	[  +4.336134] tmpfs: Unknown parameter 'noswap'
	[  +4.379121] tmpfs: Unknown parameter 'noswap'
	[Nov 9 14:39] tmpfs: Unknown parameter 'noswap'
	[ +11.186367] tmpfs: Unknown parameter 'noswap'
	[Nov 9 14:40] tmpfs: Unknown parameter 'noswap'
	[  +9.212943] tmpfs: Unknown parameter 'noswap'
	[  +0.314873] tmpfs: Unknown parameter 'noswap'
	[Nov 9 14:41] tmpfs: Unknown parameter 'noswap'
	[ +20.758128] tmpfs: Unknown parameter 'noswap'
	[ +13.064022] tmpfs: Unknown parameter 'noswap'
	[ +12.755825] tmpfs: Unknown parameter 'noswap'
	[  +5.840615] tmpfs: Unknown parameter 'noswap'
	[  +3.084438] tmpfs: Unknown parameter 'noswap'
	[Nov 9 14:42] tmpfs: Unknown parameter 'noswap'
	[ +17.931690] tmpfs: Unknown parameter 'noswap'
	[ +24.532962] tmpfs: Unknown parameter 'noswap'
	[Nov 9 14:43] tmpfs: Unknown parameter 'noswap'
	[ +11.168491] tmpfs: Unknown parameter 'noswap'
	[ +17.839289] tmpfs: Unknown parameter 'noswap'
	[  +9.257633] tmpfs: Unknown parameter 'noswap'
	[  +2.228696] tmpfs: Unknown parameter 'noswap'
	[Nov 9 14:44] tmpfs: Unknown parameter 'noswap'
	[Nov 9 14:45] tmpfs: Unknown parameter 'noswap'
	[Nov 9 14:46] tmpfs: Unknown parameter 'noswap'
	[  +3.023154] tmpfs: Unknown parameter 'noswap'
	
	
	==> etcd [6c1d6990ec9a] <==
	{"level":"warn","ts":"2025-11-09T14:44:49.444674Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2025-11-09T14:44:49.444837Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.103.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.103.2:2380","--initial-cluster=missing-upgrade-184300=https://192.168.103.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.103.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.103.2:2380","--name=missing-upgrade-184300","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count
=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"warn","ts":"2025-11-09T14:44:49.4449Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2025-11-09T14:44:49.44491Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-11-09T14:44:49.444948Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-09T14:44:49.445105Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"missing-upgrade-184300","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
	{"level":"info","ts":"2025-11-09T14:44:49.445187Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"missing-upgrade-184300","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
	{"level":"warn","ts":"2025-11-09T14:44:49.445204Z","caller":"etcdmain/etcd.go:146","msg":"failed to start etcd","error":"listen tcp 192.168.103.2:2380: bind: cannot assign requested address"}
	{"level":"fatal","ts":"2025-11-09T14:44:49.445224Z","caller":"etcdmain/etcd.go:204","msg":"discovery failed","error":"listen tcp 192.168.103.2:2380: bind: cannot assign requested address","stacktrace":"go.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2\n\tgo.etcd.io/etcd/server/v3/etcdmain/etcd.go:204\ngo.etcd.io/etcd/server/v3/etcdmain.Main\n\tgo.etcd.io/etcd/server/v3/etcdmain/main.go:40\nmain.main\n\tgo.etcd.io/etcd/server/v3/main.go:31\nruntime.main\n\truntime/proc.go:250"}
	
	
	==> kernel <==
	 14:46:09 up  1:31,  0 users,  load average: 7.31, 6.40, 4.69
	Linux missing-upgrade-184300 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [f159ce501abc] <==
	W1109 14:44:58.464345       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1109 14:44:59.130395       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1109 14:45:01.601014       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1109 14:45:03.809945       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [bd1e895dfe29] <==
	I1109 14:45:05.620000       1 serving.go:348] Generated self-signed cert in-memory
	I1109 14:45:05.924434       1 controllermanager.go:189] "Starting" version="v1.28.3"
	I1109 14:45:05.925083       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:45:05.928186       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1109 14:45:05.928533       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1109 14:45:05.929163       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1109 14:45:05.929376       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [df0342a399cd] <==
	I1109 14:43:11.332470       1 serving.go:348] Generated self-signed cert in-memory
	I1109 14:43:11.591792       1 controllermanager.go:189] "Starting" version="v1.28.3"
	I1109 14:43:11.591872       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:43:11.593517       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1109 14:43:11.593716       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1109 14:43:11.594301       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1109 14:43:11.594521       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E1109 14:44:42.591859       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.103.2:8443/healthz\": dial tcp 192.168.103.2:8443: i/o timeout"
	
	
	==> kube-scheduler [9916a784d53c] <==
	W1109 14:45:29.325787       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.103.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.103.2:8443: i/o timeout
	I1109 14:45:29.325969       1 trace.go:236] Trace[1904761224]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:150 (09-Nov-2025 14:44:59.325) (total time: 30001ms):
	Trace[1904761224]: ---"Objects listed" error:Get "https://192.168.103.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.103.2:8443: i/o timeout 30001ms (14:45:29.325)
	Trace[1904761224]: [30.00147196s] [30.00147196s] END
	E1109 14:45:29.325996       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.103.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.103.2:8443: i/o timeout
	W1109 14:45:30.147196       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.103.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.103.2:8443: i/o timeout
	I1109 14:45:30.147324       1 trace.go:236] Trace[767725962]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:150 (09-Nov-2025 14:45:00.146) (total time: 30002ms):
	Trace[767725962]: ---"Objects listed" error:Get "https://192.168.103.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.103.2:8443: i/o timeout 30002ms (14:45:30.147)
	Trace[767725962]: [30.002152182s] [30.002152182s] END
	E1109 14:45:30.147344       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.103.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.103.2:8443: i/o timeout
	W1109 14:45:30.819838       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.103.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.103.2:8443: i/o timeout
	I1109 14:45:30.819939       1 trace.go:236] Trace[494100178]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:150 (09-Nov-2025 14:45:00.820) (total time: 30000ms):
	Trace[494100178]: ---"Objects listed" error:Get "https://192.168.103.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.103.2:8443: i/o timeout 30000ms (14:45:30.819)
	Trace[494100178]: [30.000775598s] [30.000775598s] END
	E1109 14:45:30.819958       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.103.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.103.2:8443: i/o timeout
	W1109 14:46:06.529642       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.103.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.103.2:8443: i/o timeout
	I1109 14:46:06.529758       1 trace.go:236] Trace[1195742103]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:150 (09-Nov-2025 14:45:36.530) (total time: 30001ms):
	Trace[1195742103]: ---"Objects listed" error:Get "https://192.168.103.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.103.2:8443: i/o timeout 30001ms (14:46:06.529)
	Trace[1195742103]: [30.001154593s] [30.001154593s] END
	E1109 14:46:06.529809       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.103.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.103.2:8443: i/o timeout
	W1109 14:46:09.698695       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.103.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.103.2:8443: i/o timeout
	I1109 14:46:09.698864       1 trace.go:236] Trace[850138191]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:150 (09-Nov-2025 14:45:39.699) (total time: 30001ms):
	Trace[850138191]: ---"Objects listed" error:Get "https://192.168.103.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.103.2:8443: i/o timeout 30001ms (14:46:09.698)
	Trace[850138191]: [30.001166504s] [30.001166504s] END
	E1109 14:46:09.699131       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.103.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.103.2:8443: i/o timeout
	
	
	==> kubelet <==
	Nov 09 14:45:43 missing-upgrade-184300 kubelet[19241]: E1109 14:45:43.305414   19241 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"missing-upgrade-184300\" not found"
	Nov 09 14:45:47 missing-upgrade-184300 kubelet[19241]: E1109 14:45:47.102804   19241 kubelet_node_status.go:701] "Failed to set some node status fields" err="failed to validate nodeIP: node IP: \"192.168.103.2\" not found in the host's network interfaces" node="missing-upgrade-184300"
	Nov 09 14:45:47 missing-upgrade-184300 kubelet[19241]: I1109 14:45:47.116769   19241 scope.go:117] "RemoveContainer" containerID="6c1d6990ec9a17213a491b9b31dc6b7a0c15a7cd579cd9b12f9b4eb7b925d0fc"
	Nov 09 14:45:47 missing-upgrade-184300 kubelet[19241]: E1109 14:45:47.117588   19241 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-missing-upgrade-184300_kube-system(abab79edb87c1fccd3ab8e9e9b238817)\"" pod="kube-system/etcd-missing-upgrade-184300" podUID="abab79edb87c1fccd3ab8e9e9b238817"
	Nov 09 14:45:53 missing-upgrade-184300 kubelet[19241]: E1109 14:45:53.305717   19241 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"missing-upgrade-184300\" not found"
	Nov 09 14:45:54 missing-upgrade-184300 kubelet[19241]: E1109 14:45:54.080847   19241 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.103.2:8443: i/o timeout" node="missing-upgrade-184300"
	Nov 09 14:45:55 missing-upgrade-184300 kubelet[19241]: E1109 14:45:55.103357   19241 kubelet_node_status.go:701] "Failed to set some node status fields" err="failed to validate nodeIP: node IP: \"192.168.103.2\" not found in the host's network interfaces" node="missing-upgrade-184300"
	Nov 09 14:45:55 missing-upgrade-184300 kubelet[19241]: I1109 14:45:55.115074   19241 scope.go:117] "RemoveContainer" containerID="f159ce501abc24427240ea30e47c614499580593ebea41b7479437ced5f19334"
	Nov 09 14:45:55 missing-upgrade-184300 kubelet[19241]: E1109 14:45:55.115976   19241 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-missing-upgrade-184300_kube-system(37e4e85aab00465017c31e2a0d667cdc)\"" pod="kube-system/kube-apiserver-missing-upgrade-184300" podUID="37e4e85aab00465017c31e2a0d667cdc"
	Nov 09 14:45:55 missing-upgrade-184300 kubelet[19241]: E1109 14:45:55.710075   19241 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/missing-upgrade-184300?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
	Nov 09 14:46:01 missing-upgrade-184300 kubelet[19241]: E1109 14:46:01.081770   19241 kubelet_node_status.go:701] "Failed to set some node status fields" err="failed to validate nodeIP: node IP: \"192.168.103.2\" not found in the host's network interfaces" node="missing-upgrade-184300"
	Nov 09 14:46:01 missing-upgrade-184300 kubelet[19241]: I1109 14:46:01.095771   19241 kubelet_node_status.go:70] "Attempting to register node" node="missing-upgrade-184300"
	Nov 09 14:46:01 missing-upgrade-184300 kubelet[19241]: E1109 14:46:01.102958   19241 kubelet_node_status.go:701] "Failed to set some node status fields" err="failed to validate nodeIP: node IP: \"192.168.103.2\" not found in the host's network interfaces" node="missing-upgrade-184300"
	Nov 09 14:46:01 missing-upgrade-184300 kubelet[19241]: I1109 14:46:01.114736   19241 scope.go:117] "RemoveContainer" containerID="6c1d6990ec9a17213a491b9b31dc6b7a0c15a7cd579cd9b12f9b4eb7b925d0fc"
	Nov 09 14:46:01 missing-upgrade-184300 kubelet[19241]: E1109 14:46:01.115257   19241 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-missing-upgrade-184300_kube-system(abab79edb87c1fccd3ab8e9e9b238817)\"" pod="kube-system/etcd-missing-upgrade-184300" podUID="abab79edb87c1fccd3ab8e9e9b238817"
	Nov 09 14:46:03 missing-upgrade-184300 kubelet[19241]: E1109 14:46:03.306457   19241 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"missing-upgrade-184300\" not found"
	Nov 09 14:46:07 missing-upgrade-184300 kubelet[19241]: E1109 14:46:07.101839   19241 kubelet_node_status.go:701] "Failed to set some node status fields" err="failed to validate nodeIP: node IP: \"192.168.103.2\" not found in the host's network interfaces" node="missing-upgrade-184300"
	Nov 09 14:46:07 missing-upgrade-184300 kubelet[19241]: I1109 14:46:07.125981   19241 scope.go:117] "RemoveContainer" containerID="f159ce501abc24427240ea30e47c614499580593ebea41b7479437ced5f19334"
	Nov 09 14:46:07 missing-upgrade-184300 kubelet[19241]: E1109 14:46:07.126669   19241 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-missing-upgrade-184300_kube-system(37e4e85aab00465017c31e2a0d667cdc)\"" pod="kube-system/kube-apiserver-missing-upgrade-184300" podUID="37e4e85aab00465017c31e2a0d667cdc"
	Nov 09 14:46:07 missing-upgrade-184300 kubelet[19241]: W1109 14:46:07.134752   19241 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmissing-upgrade-184300&limit=500&resourceVersion=0": dial tcp 192.168.103.2:8443: i/o timeout
	Nov 09 14:46:07 missing-upgrade-184300 kubelet[19241]: I1109 14:46:07.134895   19241 trace.go:236] Trace[1665289309]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:150 (09-Nov-2025 14:45:37.135) (total time: 30001ms):
	Nov 09 14:46:07 missing-upgrade-184300 kubelet[19241]: Trace[1665289309]: ---"Objects listed" error:Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmissing-upgrade-184300&limit=500&resourceVersion=0": dial tcp 192.168.103.2:8443: i/o timeout 30001ms (14:46:07.134)
	Nov 09 14:46:07 missing-upgrade-184300 kubelet[19241]: Trace[1665289309]: [30.001345257s] [30.001345257s] END
	Nov 09 14:46:07 missing-upgrade-184300 kubelet[19241]: E1109 14:46:07.134913   19241 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmissing-upgrade-184300&limit=500&resourceVersion=0": dial tcp 192.168.103.2:8443: i/o timeout
	Nov 09 14:46:09 missing-upgrade-184300 kubelet[19241]: E1109 14:46:09.223721   19241 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.103.2:8443: i/o timeout
	

                                                
                                                
-- /stdout --
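Every failure in the dump above shares one signature: a client-go reflector's List call dying with "dial tcp 192.168.103.2:8443: i/o timeout", meaning nothing inside the node could reach the apiserver endpoint. As a minimal illustration (not part of the test suite; the address is taken from the log), the equivalent connectivity check in Go is just a TCP dial with a deadline:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 192.168.103.2:8443 is the apiserver endpoint taken from the log above.
	conn, err := net.DialTimeout("tcp", "192.168.103.2:8443", 30*time.Second)
	if err != nil {
		// Same failure signature the reflectors report: "dial tcp ... i/o timeout".
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is reachable")
}

A timeout here is consistent with the CrashLoopBackOff entries for etcd and kube-apiserver in the kubelet log: with the apiserver container down, the endpoint never comes up, so node registration, lease renewal, and every informer list/watch fail the same way.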
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p missing-upgrade-184300 -n missing-upgrade-184300
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p missing-upgrade-184300 -n missing-upgrade-184300: exit status 2 (694.5805ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "missing-upgrade-184300" apiserver is not running, skipping kubectl commands (state="Stopped")
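The harness extracts a single field from minikube status via a Go output template and treats the non-zero exit as informational, since a stopped apiserver is the expected state at this point ("may be ok"). A sketch of that pattern, with illustrative names rather than the actual helpers_test.go code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors the harness invocation above; assumes minikube is on PATH.
	cmd := exec.Command("minikube", "status",
		"--format={{.APIServer}}", "-p", "missing-upgrade-184300")
	out, err := cmd.Output()
	state := strings.TrimSpace(string(out)) // e.g. "Stopped"

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// minikube status exits non-zero when a component is not Running;
		// the harness logs this as "may be ok" instead of failing outright.
		fmt.Printf("state=%q exit=%d\n", state, exitErr.ExitCode())
		return
	}
	fmt.Printf("state=%q\n", state)
}

Here the non-zero exit (status 2) accompanied the "Stopped" state printed above, so the harness skipped the kubectl checks and moved on to cleanup.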
helpers_test.go:175: Cleaning up "missing-upgrade-184300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-184300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-184300: (3.4352734s)
--- FAIL: TestMissingContainerUpgrade (1098.54s)

                                                
                                    

Test pass (316/345)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.18
4 TestDownloadOnly/v1.28.0/preload-exists 0.05
7 TestDownloadOnly/v1.28.0/kubectl 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.27
9 TestDownloadOnly/v1.28.0/DeleteAll 1.26
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.91
12 TestDownloadOnly/v1.34.1/json-events 5.71
13 TestDownloadOnly/v1.34.1/preload-exists 0
16 TestDownloadOnly/v1.34.1/kubectl 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.21
18 TestDownloadOnly/v1.34.1/DeleteAll 1.14
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.47
20 TestDownloadOnlyKic 1.99
21 TestBinaryMirror 2.64
22 TestOffline 115.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.21
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.2
27 TestAddons/Setup 313.8
29 TestAddons/serial/Volcano 49.2
31 TestAddons/serial/GCPAuth/Namespaces 0.25
32 TestAddons/serial/GCPAuth/FakeCredentials 11.2
36 TestAddons/parallel/RegistryCreds 1.48
38 TestAddons/parallel/InspektorGadget 11.19
39 TestAddons/parallel/MetricsServer 8.05
41 TestAddons/parallel/CSI 53.73
42 TestAddons/parallel/Headlamp 36.13
43 TestAddons/parallel/CloudSpanner 7.57
44 TestAddons/parallel/LocalPath 57.63
45 TestAddons/parallel/NvidiaDevicePlugin 6.86
46 TestAddons/parallel/Yakd 13.23
47 TestAddons/parallel/AmdGpuDevicePlugin 7.08
48 TestAddons/StoppedEnableDisable 13.08
49 TestCertOptions 58.18
50 TestCertExpiration 280.75
51 TestDockerFlags 58.22
52 TestForceSystemdFlag 95.75
53 TestForceSystemdEnv 61.47
59 TestErrorSpam/start 2.76
60 TestErrorSpam/status 2.3
61 TestErrorSpam/pause 2.63
62 TestErrorSpam/unpause 2.6
63 TestErrorSpam/stop 19.73
66 TestFunctional/serial/CopySyncFile 0.04
67 TestFunctional/serial/StartWithProxy 90.52
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 53.73
70 TestFunctional/serial/KubeContext 0.1
71 TestFunctional/serial/KubectlGetPods 0.29
74 TestFunctional/serial/CacheCmd/cache/add_remote 10.16
75 TestFunctional/serial/CacheCmd/cache/add_local 4.49
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.19
77 TestFunctional/serial/CacheCmd/cache/list 0.22
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.62
79 TestFunctional/serial/CacheCmd/cache/cache_reload 4.54
80 TestFunctional/serial/CacheCmd/cache/delete 0.39
81 TestFunctional/serial/MinikubeKubectlCmd 0.49
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.34
83 TestFunctional/serial/ExtraConfig 69.97
84 TestFunctional/serial/ComponentHealth 0.14
85 TestFunctional/serial/LogsCmd 1.75
86 TestFunctional/serial/LogsFileCmd 1.85
87 TestFunctional/serial/InvalidService 5.15
89 TestFunctional/parallel/ConfigCmd 1.22
91 TestFunctional/parallel/DryRun 1.67
92 TestFunctional/parallel/InternationalLanguage 0.66
93 TestFunctional/parallel/StatusCmd 2.19
98 TestFunctional/parallel/AddonsCmd 0.45
99 TestFunctional/parallel/PersistentVolumeClaim 63.68
101 TestFunctional/parallel/SSHCmd 1.15
102 TestFunctional/parallel/CpCmd 3.55
103 TestFunctional/parallel/MySQL 55.23
104 TestFunctional/parallel/FileSync 0.54
105 TestFunctional/parallel/CertSync 3.77
109 TestFunctional/parallel/NodeLabels 0.13
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
113 TestFunctional/parallel/License 1.55
114 TestFunctional/parallel/ServiceCmd/DeployApp 8.33
115 TestFunctional/parallel/Version/short 0.18
116 TestFunctional/parallel/Version/components 1.02
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.51
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.47
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.48
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.74
121 TestFunctional/parallel/ImageCommands/ImageBuild 5.35
122 TestFunctional/parallel/ImageCommands/Setup 1.69
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.56
124 TestFunctional/parallel/ProfileCmd/profile_not_create 1.08
125 TestFunctional/parallel/ProfileCmd/profile_list 1.06
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.98
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.11
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.76
130 TestFunctional/parallel/ServiceCmd/List 0.79
131 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 14.41
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.69
135 TestFunctional/parallel/ServiceCmd/HTTPS 15.03
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.98
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.71
138 TestFunctional/parallel/ImageCommands/ImageRemove 1.18
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.14
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.91
141 TestFunctional/parallel/DockerEnv/powershell 5.47
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.14
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.21
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.33
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.32
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.31
151 TestFunctional/parallel/ServiceCmd/Format 15.01
152 TestFunctional/parallel/ServiceCmd/URL 15.01
153 TestFunctional/delete_echo-server_images 0.15
154 TestFunctional/delete_my-image_image 0.06
155 TestFunctional/delete_minikube_cached_images 0.06
160 TestMultiControlPlane/serial/StartCluster 246.59
161 TestMultiControlPlane/serial/DeployApp 8.95
162 TestMultiControlPlane/serial/PingHostFromPods 2.51
163 TestMultiControlPlane/serial/AddWorkerNode 57.62
164 TestMultiControlPlane/serial/NodeLabels 0.15
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 2.08
166 TestMultiControlPlane/serial/CopyFile 36.1
167 TestMultiControlPlane/serial/StopSecondaryNode 13.46
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 1.63
169 TestMultiControlPlane/serial/RestartSecondaryNode 104.16
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 2.09
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 200.88
172 TestMultiControlPlane/serial/DeleteSecondaryNode 14.5
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.6
174 TestMultiControlPlane/serial/StopCluster 38.07
175 TestMultiControlPlane/serial/RestartCluster 120.96
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.56
177 TestMultiControlPlane/serial/AddSecondaryNode 103.48
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 2.05
181 TestImageBuild/serial/Setup 54.65
182 TestImageBuild/serial/NormalBuild 4.54
183 TestImageBuild/serial/BuildWithBuildArg 2.11
184 TestImageBuild/serial/BuildWithDockerIgnore 1.22
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.26
190 TestJSONOutput/start/Command 83.9
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 1.17
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.9
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 12.13
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.68
215 TestKicCustomNetwork/create_custom_network 57.22
216 TestKicCustomNetwork/use_default_bridge_network 56.63
217 TestKicExistingNetwork 57.83
218 TestKicCustomSubnet 59.71
219 TestKicStaticIP 58.72
220 TestMainNoArgs 0.16
221 TestMinikubeProfile 108.68
224 TestMountStart/serial/StartWithMountFirst 14.35
225 TestMountStart/serial/VerifyMountFirst 0.59
226 TestMountStart/serial/StartWithMountSecond 14.08
227 TestMountStart/serial/VerifyMountSecond 0.56
228 TestMountStart/serial/DeleteFirst 2.44
229 TestMountStart/serial/VerifyMountPostDelete 0.56
230 TestMountStart/serial/Stop 1.91
231 TestMountStart/serial/RestartStopped 11.04
232 TestMountStart/serial/VerifyMountPostStop 0.58
235 TestMultiNode/serial/FreshStart2Nodes 131.16
236 TestMultiNode/serial/DeployApp2Nodes 7.2
237 TestMultiNode/serial/PingHostFrom2Pods 1.76
238 TestMultiNode/serial/AddNode 56.91
239 TestMultiNode/serial/MultiNodeLabels 0.13
240 TestMultiNode/serial/ProfileList 1.44
241 TestMultiNode/serial/CopyFile 19.76
242 TestMultiNode/serial/StopNode 3.93
243 TestMultiNode/serial/StartAfterStop 13.4
244 TestMultiNode/serial/RestartKeepsNodes 88.31
245 TestMultiNode/serial/DeleteNode 8.15
246 TestMultiNode/serial/StopMultiNode 23.92
247 TestMultiNode/serial/RestartMultiNode 56.04
248 TestMultiNode/serial/ValidateNameConflict 53.81
252 TestPreload 142.52
253 TestScheduledStopWindows 116.52
257 TestInsufficientStorage 31
258 TestRunningBinaryUpgrade 90.4
260 TestKubernetesUpgrade 436.73
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.25
264 TestNoKubernetes/serial/StartWithK8s 96.11
265 TestNoKubernetes/serial/StartWithStopK8s 25.22
266 TestStoppedBinaryUpgrade/Setup 0.94
267 TestStoppedBinaryUpgrade/Upgrade 97.78
268 TestNoKubernetes/serial/Start 54.81
269 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.69
271 TestNoKubernetes/serial/ProfileList 4.97
272 TestNoKubernetes/serial/Stop 2.05
273 TestNoKubernetes/serial/StartNoArgs 22.24
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.62
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.75
284 TestPause/serial/Start 91.35
285 TestPause/serial/SecondStartNoReconfiguration 58.61
286 TestPause/serial/Pause 1.12
287 TestPause/serial/VerifyStatus 0.68
288 TestPause/serial/Unpause 0.91
289 TestPause/serial/PauseAgain 1.3
290 TestPause/serial/DeletePaused 4.07
291 TestPause/serial/VerifyDeletedResources 1.86
304 TestStartStop/group/old-k8s-version/serial/FirstStart 74.71
306 TestStartStop/group/no-preload/serial/FirstStart 102.54
308 TestStartStop/group/embed-certs/serial/FirstStart 87.01
309 TestStartStop/group/old-k8s-version/serial/DeployApp 13.72
310 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 5.63
311 TestStartStop/group/old-k8s-version/serial/Stop 12.33
312 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.57
313 TestStartStop/group/old-k8s-version/serial/SecondStart 52.47
314 TestStartStop/group/embed-certs/serial/DeployApp 8.66
315 TestStartStop/group/no-preload/serial/DeployApp 10.68
316 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.9
317 TestStartStop/group/embed-certs/serial/Stop 12.5
318 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.76
319 TestStartStop/group/no-preload/serial/Stop 12.27
320 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.55
321 TestStartStop/group/embed-certs/serial/SecondStart 54.38
322 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
323 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.78
324 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.55
325 TestStartStop/group/no-preload/serial/SecondStart 67.68
326 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.74
327 TestStartStop/group/old-k8s-version/serial/Pause 5.61
329 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.03
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 8.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.29
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.54
333 TestStartStop/group/embed-certs/serial/Pause 5.22
334 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
336 TestStartStop/group/newest-cni/serial/FirstStart 58.41
337 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.33
338 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.45
339 TestStartStop/group/no-preload/serial/Pause 10.53
340 TestNetworkPlugins/group/auto/Start 82.19
341 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 13.71
342 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.72
343 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.33
344 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.63
345 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.14
346 TestStartStop/group/newest-cni/serial/DeployApp 0
347 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.16
348 TestStartStop/group/newest-cni/serial/Stop 12.41
349 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.66
350 TestStartStop/group/newest-cni/serial/SecondStart 26.76
351 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.5
354 TestStartStop/group/newest-cni/serial/Pause 5.49
355 TestNetworkPlugins/group/auto/KubeletFlags 0.65
356 TestNetworkPlugins/group/auto/NetCatPod 15.53
357 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
358 TestNetworkPlugins/group/kindnet/Start 87.56
359 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.27
360 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.47
361 TestStartStop/group/default-k8s-diff-port/serial/Pause 9.97
362 TestNetworkPlugins/group/auto/DNS 0.22
363 TestNetworkPlugins/group/auto/Localhost 0.2
364 TestNetworkPlugins/group/auto/HairPin 0.21
365 TestNetworkPlugins/group/calico/Start 117.74
366 TestNetworkPlugins/group/custom-flannel/Start 68.51
367 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
368 TestNetworkPlugins/group/kindnet/KubeletFlags 0.6
369 TestNetworkPlugins/group/kindnet/NetCatPod 17.56
370 TestNetworkPlugins/group/kindnet/DNS 0.25
371 TestNetworkPlugins/group/kindnet/Localhost 0.22
372 TestNetworkPlugins/group/kindnet/HairPin 0.27
373 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.55
374 TestNetworkPlugins/group/custom-flannel/NetCatPod 15.6
375 TestNetworkPlugins/group/custom-flannel/DNS 0.28
376 TestNetworkPlugins/group/custom-flannel/Localhost 0.34
377 TestNetworkPlugins/group/custom-flannel/HairPin 0.24
378 TestNetworkPlugins/group/calico/ControllerPod 6.01
379 TestNetworkPlugins/group/calico/KubeletFlags 0.6
380 TestNetworkPlugins/group/false/Start 94.92
381 TestNetworkPlugins/group/calico/NetCatPod 25.67
382 TestNetworkPlugins/group/enable-default-cni/Start 104.68
383 TestNetworkPlugins/group/calico/DNS 0.24
384 TestNetworkPlugins/group/calico/Localhost 0.21
385 TestNetworkPlugins/group/calico/HairPin 0.21
386 TestNetworkPlugins/group/flannel/Start 84.17
387 TestNetworkPlugins/group/bridge/Start 88.44
388 TestNetworkPlugins/group/false/KubeletFlags 0.67
389 TestNetworkPlugins/group/false/NetCatPod 15.6
390 TestNetworkPlugins/group/false/DNS 0.24
391 TestNetworkPlugins/group/false/Localhost 0.2
392 TestNetworkPlugins/group/false/HairPin 0.2
393 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.64
394 TestNetworkPlugins/group/enable-default-cni/NetCatPod 16.76
395 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
396 TestNetworkPlugins/group/enable-default-cni/Localhost 0.23
397 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
398 TestNetworkPlugins/group/kubenet/Start 100.44
399 TestNetworkPlugins/group/flannel/ControllerPod 6.01
400 TestNetworkPlugins/group/flannel/KubeletFlags 0.6
401 TestNetworkPlugins/group/bridge/KubeletFlags 0.6
402 TestNetworkPlugins/group/flannel/NetCatPod 26.82
403 TestNetworkPlugins/group/bridge/NetCatPod 25.2
404 TestNetworkPlugins/group/bridge/DNS 0.27
405 TestNetworkPlugins/group/bridge/Localhost 0.22
406 TestNetworkPlugins/group/bridge/HairPin 0.21
407 TestNetworkPlugins/group/flannel/DNS 0.3
408 TestNetworkPlugins/group/flannel/Localhost 0.23
409 TestNetworkPlugins/group/flannel/HairPin 0.21
410 TestNetworkPlugins/group/kubenet/KubeletFlags 0.56
411 TestNetworkPlugins/group/kubenet/NetCatPod 14.53
412 TestNetworkPlugins/group/kubenet/DNS 0.23
413 TestNetworkPlugins/group/kubenet/Localhost 0.2
414 TestNetworkPlugins/group/kubenet/HairPin 0.21
TestDownloadOnly/v1.28.0/json-events (6.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-164700 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-164700 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker: (6.1761375s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.18s)
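The json-events subtest exercises minikube start -o=json, which prints one JSON object per line as the start progresses. A rough sketch of consuming that stream, decoding each line generically (no event field names are assumed here):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Flags mirror the invocation in the test output above.
	cmd := exec.Command("minikube", "start", "-o=json", "--download-only",
		"-p", "download-only-164700", "--kubernetes-version=v1.28.0", "--driver=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev map[string]any
		if json.Unmarshal(sc.Bytes(), &ev) == nil {
			fmt.Println("event:", ev) // each line is a self-contained JSON event
		}
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}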

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1109 13:29:16.375849   10336 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1109 13:29:16.421212   10336 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.05s)
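preload-exists only needs to confirm that the tarball downloaded by the previous subtest is on disk. A minimal sketch of that check, with the path layout copied from the log line above (the helper name is illustrative):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the cache path seen in the log:
// <minikube home>\cache\preloaded-tarball\preloaded-images-k8s-v18-<version>-docker-overlay2-amd64.tar.lz4
func preloadPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf(
		"preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(`C:\Users\jenkins.minikube4\minikube-integration\.minikube`, "v1.28.0")
	if _, err := os.Stat(p); err != nil {
		fmt.Println("preload missing:", err)
		return
	}
	fmt.Println("found local preload:", p)
}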

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
--- PASS: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-164700
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-164700: exit status 85 (268.8435ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                       ARGS                                                                        │       PROFILE        │       USER        │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-164700 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker │ download-only-164700 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 13:29:10
	Running on machine: minikube4
	Binary: Built with gc go1.24.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:29:10.273155    5164 out.go:360] Setting OutFile to fd 764 ...
	I1109 13:29:10.313969    5164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:29:10.313969    5164 out.go:374] Setting ErrFile to fd 768...
	I1109 13:29:10.313969    5164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1109 13:29:10.324533    5164 root.go:314] Error reading config file at C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I1109 13:29:10.331857    5164 out.go:368] Setting JSON to true
	I1109 13:29:10.334830    5164 start.go:133] hostinfo: {"hostname":"minikube4","uptime":900,"bootTime":1762694050,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6456 Build 19045.6456","kernelVersion":"10.0.19045.6456 Build 19045.6456","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1109 13:29:10.335828    5164 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1109 13:29:10.347828    5164 out.go:99] [download-only-164700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
	W1109 13:29:10.347828    5164 preload.go:354] Failed to list preload files: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I1109 13:29:10.348827    5164 notify.go:221] Checking for updates...
	I1109 13:29:10.351026    5164 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1109 13:29:10.352614    5164 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1109 13:29:10.354852    5164 out.go:171] MINIKUBE_LOCATION=21139
	I1109 13:29:10.356326    5164 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1109 13:29:10.360748    5164 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1109 13:29:10.361263    5164 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:29:10.579629    5164 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1109 13:29:10.585629    5164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:29:11.240640    5164 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:69 SystemTime:2025-11-09 13:29:11.218407419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1109 13:29:11.244150    5164 out.go:99] Using the docker driver based on user configuration
	I1109 13:29:11.244150    5164 start.go:309] selected driver: docker
	I1109 13:29:11.244150    5164 start.go:930] validating driver "docker" against <nil>
	I1109 13:29:11.255821    5164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:29:11.491298    5164 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:69 SystemTime:2025-11-09 13:29:11.473420809 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1109 13:29:11.491298    5164 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 13:29:11.541612    5164 start_flags.go:410] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1109 13:29:11.542249    5164 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1109 13:29:11.559700    5164 out.go:171] Using Docker Desktop driver with root privileges
	I1109 13:29:11.562027    5164 cni.go:84] Creating CNI manager for ""
	I1109 13:29:11.562158    5164 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1109 13:29:11.562158    5164 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1109 13:29:11.562158    5164 start.go:353] cluster config:
	{Name:download-only-164700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-164700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:29:11.565578    5164 out.go:99] Starting "download-only-164700" primary control-plane node in "download-only-164700" cluster
	I1109 13:29:11.565578    5164 cache.go:134] Beginning downloading kic base image for docker with docker
	I1109 13:29:11.566793    5164 out.go:99] Pulling base image v0.0.48-1761985721-21837 ...
	I1109 13:29:11.567757    5164 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1109 13:29:11.567757    5164 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 13:29:11.616093    5164 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1109 13:29:11.616093    5164 cache.go:65] Caching tarball of preloaded images
	I1109 13:29:11.616947    5164 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1109 13:29:11.628525    5164 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1109 13:29:11.628525    5164 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1761985721-21837@sha256_a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1.tar
	I1109 13:29:11.628525    5164 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1761985721-21837@sha256_a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1.tar
	I1109 13:29:11.628525    5164 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1109 13:29:11.631989    5164 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1109 13:29:11.642376    5164 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1109 13:29:11.642376    5164 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1109 13:29:11.709994    5164 preload.go:295] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1109 13:29:11.710570    5164 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1109 13:29:14.629177    5164 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on docker
	I1109 13:29:14.629525    5164 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-164700\config.json ...
	I1109 13:29:14.630137    5164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-164700\config.json: {Name:mk59c3feef56c8ac03455de3e07f5fdf7490115c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:14.630457    5164 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1109 13:29:14.632206    5164 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v1.28.0/kubectl.exe
	
	
	* The control-plane node download-only-164700 host does not exist
	  To start a cluster, run: "minikube start -p download-only-164700"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.27s)
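Because the download-only profile never created a host, minikube logs has nothing to read and exits non-zero; the 85 seen here accompanies the "control-plane node ... host does not exist" message in the dump. Judging by the subtest's name and its 0.27s result, the point is that the failure is fast. A sketch of that shape (not the actual aaa_download_only_test.go code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	err := exec.Command("minikube", "logs", "-p", "download-only-164700").Run()
	elapsed := time.Since(start)
	// Expect something like "exit status 85" in well under a second.
	fmt.Printf("err=%v elapsed=%s\n", err, elapsed)
}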

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (1.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2617037s)
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (1.26s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.91s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-164700
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.91s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (5.71s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-773500 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-773500 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker: (5.7067329s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (5.71s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1109 13:29:24.572524   10336 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime docker
I1109 13:29:24.572524   10336 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
--- PASS: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-773500
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-773500: exit status 85 (205.3734ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │       PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-164700 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker │ download-only-164700 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │                     │
	│ delete  │ --all                                                                                                                                             │ minikube             │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ delete  │ -p download-only-164700                                                                                                                           │ download-only-164700 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ start   │ -o=json --download-only -p download-only-773500 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker │ download-only-773500 │ minikube4\jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 13:29:18
	Running on machine: minikube4
	Binary: Built with gc go1.24.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:29:18.942398   10568 out.go:360] Setting OutFile to fd 908 ...
	I1109 13:29:18.984352   10568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:29:18.984352   10568 out.go:374] Setting ErrFile to fd 912...
	I1109 13:29:18.984352   10568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:29:18.998530   10568 out.go:368] Setting JSON to true
	I1109 13:29:19.000929   10568 start.go:133] hostinfo: {"hostname":"minikube4","uptime":908,"bootTime":1762694050,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6456 Build 19045.6456","kernelVersion":"10.0.19045.6456 Build 19045.6456","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1109 13:29:19.000929   10568 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1109 13:29:19.016376   10568 out.go:99] [download-only-773500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
	I1109 13:29:19.016376   10568 notify.go:221] Checking for updates...
	I1109 13:29:19.018572   10568 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1109 13:29:19.020842   10568 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1109 13:29:19.025289   10568 out.go:171] MINIKUBE_LOCATION=21139
	I1109 13:29:19.035023   10568 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1109 13:29:19.042859   10568 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1109 13:29:19.043840   10568 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:29:19.166699   10568 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1109 13:29:19.172322   10568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:29:19.397032   10568 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:69 SystemTime:2025-11-09 13:29:19.375931641 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1109 13:29:19.399031   10568 out.go:99] Using the docker driver based on user configuration
	I1109 13:29:19.399031   10568 start.go:309] selected driver: docker
	I1109 13:29:19.399031   10568 start.go:930] validating driver "docker" against <nil>
	I1109 13:29:19.410796   10568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:29:19.644876   10568 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:69 SystemTime:2025-11-09 13:29:19.625205261 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1109 13:29:19.644964   10568 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 13:29:19.682577   10568 start_flags.go:410] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1109 13:29:19.683600   10568 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1109 13:29:19.938384   10568 out.go:171] Using Docker Desktop driver with root privileges
	I1109 13:29:19.945613   10568 cni.go:84] Creating CNI manager for ""
	I1109 13:29:19.946095   10568 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1109 13:29:19.946149   10568 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1109 13:29:19.946149   10568 start.go:353] cluster config:
	{Name:download-only-773500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-773500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:29:19.956249   10568 out.go:99] Starting "download-only-773500" primary control-plane node in "download-only-773500" cluster
	I1109 13:29:19.956249   10568 cache.go:134] Beginning downloading kic base image for docker with docker
	I1109 13:29:19.976059   10568 out.go:99] Pulling base image v0.0.48-1761985721-21837 ...
	I1109 13:29:19.976300   10568 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1109 13:29:19.976300   10568 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 13:29:20.013857   10568 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
	I1109 13:29:20.013857   10568 cache.go:65] Caching tarball of preloaded images
	I1109 13:29:20.013857   10568 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1109 13:29:20.035159   10568 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1109 13:29:20.035159   10568 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1761985721-21837@sha256_a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1.tar
	I1109 13:29:20.035159   10568 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1761985721-21837@sha256_a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1.tar
	I1109 13:29:20.035159   10568 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1109 13:29:20.035159   10568 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1109 13:29:20.035159   10568 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1109 13:29:20.035159   10568 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1109 13:29:20.048038   10568 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1109 13:29:20.048941   10568 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1109 13:29:20.114220   10568 preload.go:295] Got checksum from GCS API "d7f0ccd752ff15c628c6fc8ef8c8033e"
	I1109 13:29:20.114906   10568 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4?checksum=md5:d7f0ccd752ff15c628c6fc8ef8c8033e -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
	I1109 13:29:23.237016   10568 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1109 13:29:23.237907   10568 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-773500\config.json ...
	I1109 13:29:23.238235   10568 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-773500\config.json: {Name:mk48dc56ab6ee0e2a7c68a24a3a2c78ee1edb54a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:23.238429   10568 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1109 13:29:23.239259   10568 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v1.34.1/kubectl.exe
	
	
	* The control-plane node download-only-773500 host does not exist
	  To start a cluster, run: "minikube start -p download-only-773500"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.21s)
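
Note: the preload step logged above fetches the tarball's md5 digest from the GCS API and then downloads with a "?checksum=md5:..." query. A minimal Go sketch of the same verify-while-downloading idea (an illustration, not minikube's actual download package; the URL and digest are copied from the log):

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "net/http"
        "os"
    )

    // downloadWithMD5 streams url into dest while hashing, then compares
    // the hex digest with the expected checksum.
    func downloadWithMD5(url, dest, want string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()
        h := md5.New()
        // TeeReader hashes the bytes as they are written to disk.
        if _, err := io.Copy(out, io.TeeReader(resp.Body, h)); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
        }
        return nil
    }

    func main() {
        // URL and digest are the ones in the log lines above.
        err := downloadWithMD5(
            "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4",
            "preloaded-images.tar.lz4",
            "d7f0ccd752ff15c628c6fc8ef8c8033e",
        )
        if err != nil {
            log.Fatal(err)
        }
    }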

TestDownloadOnly/v1.34.1/DeleteAll (1.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1365164s)
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (1.14s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.47s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-773500
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.47s)

TestDownloadOnlyKic (1.99s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-081900 --alsologtostderr --driver=docker
aaa_download_only_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-081900 --alsologtostderr --driver=docker: (1.1991154s)
helpers_test.go:175: Cleaning up "download-docker-081900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-081900
--- PASS: TestDownloadOnlyKic (1.99s)

TestBinaryMirror (2.64s)

=== RUN   TestBinaryMirror
I1109 13:29:29.631752   10336 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/windows/amd64/kubectl.exe.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-763600 --alsologtostderr --binary-mirror http://127.0.0.1:64235 --driver=docker
aaa_download_only_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-763600 --alsologtostderr --binary-mirror http://127.0.0.1:64235 --driver=docker: (1.6695074s)
helpers_test.go:175: Cleaning up "binary-mirror-763600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-763600
--- PASS: TestBinaryMirror (2.64s)
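
Note: TestBinaryMirror serves kubectl over a local HTTP endpoint (127.0.0.1:64235 in the log) and hands that address to minikube via --binary-mirror, so nothing is fetched from dl.k8s.io. A stand-in for such a mirror can be as small as a file server; the cache directory below is a hypothetical example, not the path the test uses:

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Serve pre-downloaded kubectl/kubeadm/kubelet binaries from a local
        // directory (hypothetical path) so that
        //   minikube start --binary-mirror http://127.0.0.1:64235 ...
        // resolves its downloads locally.
        fs := http.FileServer(http.Dir(`C:\binary-mirror-cache`))
        log.Fatal(http.ListenAndServe("127.0.0.1:64235", fs))
    }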

TestOffline (115.61s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-184300 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-184300 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker: (1m49.2907195s)
helpers_test.go:175: Cleaning up "offline-docker-184300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-184300
E1109 14:29:46.312179   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-184300: (6.3162498s)
--- PASS: TestOffline (115.61s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-181600
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-181600: exit status 85 (209.7748ms)

-- stdout --
	* Profile "addons-181600" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-181600"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)
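
Note: the "Non-zero exit ... exit status 85" line above is the harness running the binary and reading its process exit code; the code's meaning is minikube's (here it accompanies the "Profile ... not found" message in stdout). A sketch of that pattern with os/exec, with the binary path and arguments copied from the test invocation:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-windows-amd64.exe",
            "addons", "enable", "dashboard", "-p", "addons-181600")
        out, err := cmd.CombinedOutput()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // Against a non-existing profile this prints 85, matching the
            // log; the mapping from code to cause is defined by minikube.
            fmt.Printf("exit code %d\n%s", ee.ExitCode(), out)
            return
        }
        if err != nil {
            fmt.Println("could not run:", err)
            return
        }
        fmt.Printf("succeeded:\n%s", out)
    }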

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.2s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-181600
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-181600: exit status 85 (198.912ms)

-- stdout --
	* Profile "addons-181600" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-181600"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.20s)

TestAddons/Setup (313.8s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-181600 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-181600 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (5m13.7985804s)
--- PASS: TestAddons/Setup (313.80s)

TestAddons/serial/Volcano (49.2s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 17.7948ms
addons_test.go:876: volcano-admission stabilized in 17.8373ms
addons_test.go:868: volcano-scheduler stabilized in 17.9024ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-z5tdw" [66ffb8c3-2c76-4bce-9227-5129f08a87d9] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.0065138s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-wlbrs" [887efa1a-24c6-44b4-8483-287e7c604495] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.0057852s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-8nlmh" [10549fff-da0d-4e5e-b973-d4201664b639] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.0063212s
addons_test.go:903: (dbg) Run:  kubectl --context addons-181600 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-181600 create -f testdata\vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-181600 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [abe8f8ca-9207-4248-b6b6-f2ca52051969] Pending
helpers_test.go:352: "test-job-nginx-0" [abe8f8ca-9207-4248-b6b6-f2ca52051969] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [abe8f8ca-9207-4248-b6b6-f2ca52051969] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 20.0073279s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-181600 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-181600 addons disable volcano --alsologtostderr -v=1: (12.4720752s)
--- PASS: TestAddons/serial/Volcano (49.20s)
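
Note: the three "waiting 6m0s for pods matching ..." phases above poll pods by label selector until they report healthy. Outside the harness, kubectl's wait verb expresses the same check; a small Go wrapper around it (profile, namespace, and selectors copied from the log; this is an approximation of the test's internal helper, not the helper itself):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    // waitReady blocks until all pods matching selector are Ready,
    // or the timeout expires.
    func waitReady(context, ns, selector, timeout string) error {
        cmd := exec.Command("kubectl", "--context", context,
            "wait", "--for=condition=Ready", "pod",
            "-l", selector, "-n", ns, "--timeout", timeout)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        for _, sel := range []string{
            "app=volcano-scheduler",
            "app=volcano-admission",
            "app=volcano-controller",
        } {
            if err := waitReady("addons-181600", "volcano-system", sel, "6m"); err != nil {
                log.Fatalf("%s never became Ready: %v", sel, err)
            }
        }
    }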

TestAddons/serial/GCPAuth/Namespaces (0.25s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-181600 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-181600 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.25s)

TestAddons/serial/GCPAuth/FakeCredentials (11.2s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-181600 create -f testdata\busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-181600 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1a400f33-780a-41ef-b935-4da5b1171916] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1a400f33-780a-41ef-b935-4da5b1171916] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.0067646s
addons_test.go:694: (dbg) Run:  kubectl --context addons-181600 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-181600 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-181600 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-181600 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.20s)

TestAddons/parallel/RegistryCreds (1.48s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 23.672ms
addons_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-181600
addons_test.go:332: (dbg) Run:  kubectl --context addons-181600 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-181600 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (1.48s)

TestAddons/parallel/InspektorGadget (11.19s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-f4c68" [11d440ca-a2fb-4930-9d6a-a1e404b8e990] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0059835s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-181600 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-181600 addons disable inspektor-gadget --alsologtostderr -v=1: (6.1870515s)
--- PASS: TestAddons/parallel/InspektorGadget (11.19s)

TestAddons/parallel/MetricsServer (8.05s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 9.126ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-qtg7d" [981853b8-8657-4914-a49d-38ec07e8f8a2] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0054088s
addons_test.go:463: (dbg) Run:  kubectl --context addons-181600 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-181600 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-181600 addons disable metrics-server --alsologtostderr -v=1: (1.8617509s)
--- PASS: TestAddons/parallel/MetricsServer (8.05s)

TestAddons/parallel/CSI (53.73s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1109 13:36:32.662482   10336 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1109 13:36:32.714677   10336 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1109 13:36:32.714677   10336 kapi.go:107] duration metric: took 52.1949ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 52.1949ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-181600 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-181600 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [eedce54e-5141-48ec-989e-2afe8edd9595] Pending
helpers_test.go:352: "task-pv-pod" [eedce54e-5141-48ec-989e-2afe8edd9595] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [eedce54e-5141-48ec-989e-2afe8edd9595] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.006633s
addons_test.go:572: (dbg) Run:  kubectl --context addons-181600 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-181600 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-181600 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-181600 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-181600 delete pod task-pv-pod: (1.1547261s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-181600 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-181600 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-181600 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [3b76d663-af82-4536-bdd8-c0b097532e8b] Pending
helpers_test.go:352: "task-pv-pod-restore" [3b76d663-af82-4536-bdd8-c0b097532e8b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [3b76d663-af82-4536-bdd8-c0b097532e8b] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0061504s
addons_test.go:614: (dbg) Run:  kubectl --context addons-181600 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-181600 delete pod task-pv-pod-restore: (1.2238787s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-181600 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-181600 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-181600 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-181600 addons disable volumesnapshots --alsologtostderr -v=1: (1.4151353s)
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-181600 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-181600 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.5270064s)
--- PASS: TestAddons/parallel/CSI (53.73s)
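
Note: the long run of helpers_test.go:402 lines above is a poll loop that reads the PVC phase with a jsonpath query until the claim binds. A standalone equivalent of that loop (profile and PVC name copied from the log; the 2-second interval is an assumption, the harness's interval may differ):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
        "time"
    )

    // pvcPhase returns the .status.phase of a PVC, e.g. "Pending" or "Bound".
    func pvcPhase(context, name, ns string) (string, error) {
        out, err := exec.Command("kubectl", "--context", context,
            "get", "pvc", name, "-n", ns,
            "-o", "jsonpath={.status.phase}").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            phase, err := pvcPhase("addons-181600", "hpvc-restore", "default")
            if err == nil && phase == "Bound" {
                fmt.Println("pvc bound")
                return
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatal("timed out waiting for pvc to bind")
    }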

TestAddons/parallel/Headlamp (36.13s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-181600 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-181600 --alsologtostderr -v=1: (1.752124s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-hcx8m" [de151d39-fc06-4182-a3a6-abe1e73fba50] Pending
helpers_test.go:352: "headlamp-6945c6f4d-hcx8m" [de151d39-fc06-4182-a3a6-abe1e73fba50] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-hcx8m" [de151d39-fc06-4182-a3a6-abe1e73fba50] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 27.0233049s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-181600 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-181600 addons disable headlamp --alsologtostderr -v=1: (7.3497333s)
--- PASS: TestAddons/parallel/Headlamp (36.13s)

TestAddons/parallel/CloudSpanner (7.57s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-l4w2c" [7796a0d0-a42c-477d-8f75-9c177c977e8b] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0094134s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-181600 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-181600 addons disable cloud-spanner --alsologtostderr -v=1: (1.5344006s)
--- PASS: TestAddons/parallel/CloudSpanner (7.57s)

TestAddons/parallel/LocalPath (57.63s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-181600 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-181600 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-181600 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [93a1e16a-e407-4e67-a84a-004dfb2c06ac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [93a1e16a-e407-4e67-a84a-004dfb2c06ac] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [93a1e16a-e407-4e67-a84a-004dfb2c06ac] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.0045926s
addons_test.go:967: (dbg) Run:  kubectl --context addons-181600 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-181600 ssh "cat /opt/local-path-provisioner/pvc-e6eea0fc-c370-49aa-9cf9-6d55c344ffdf_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-181600 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-181600 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-181600 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-181600 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.6989812s)
--- PASS: TestAddons/parallel/LocalPath (57.63s)

TestAddons/parallel/NvidiaDevicePlugin (6.86s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-snqvr" [a1854e02-e511-41b2-b597-60d69bd0103c] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0064288s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-181600 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.86s)

TestAddons/parallel/Yakd (13.23s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-bqbm5" [0622038c-b651-426a-8186-f5980a1b3f04] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006773s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-181600 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-181600 addons disable yakd --alsologtostderr -v=1: (7.2169257s)
--- PASS: TestAddons/parallel/Yakd (13.23s)

TestAddons/parallel/AmdGpuDevicePlugin (7.08s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-mx25w" [a1c32326-e313-4306-81b7-c8ebf118a002] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.0291308s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-181600 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-181600 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: (1.0531103s)
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (7.08s)

TestAddons/StoppedEnableDisable (13.08s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-181600
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-181600: (12.2222576s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-181600
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-181600
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-181600
--- PASS: TestAddons/StoppedEnableDisable (13.08s)

TestCertOptions (58.18s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-824500 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-824500 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (53.0502616s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-824500 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
I1109 14:36:11.427968   10336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-824500
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-824500 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-824500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-824500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-824500: (3.8444524s)
--- PASS: TestCertOptions (58.18s)
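
Note: the openssl step above inspects /var/lib/minikube/certs/apiserver.crt to confirm that the extra --apiserver-ips and --apiserver-names values ended up as SANs. The same check can be done in Go with crypto/x509 on a copy of the cert (fetching the file out of the node first, e.g. via minikube ssh plus cat, is assumed here):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // Local copy of the apiserver certificate (assumed filename).
        pemBytes, err := os.ReadFile("apiserver.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // Expect localhost and www.google.com among the DNS SANs, and
        // 127.0.0.1 and 192.168.15.15 among the IP SANs, per the flags above.
        fmt.Println("DNS SANs:", cert.DNSNames)
        fmt.Println("IP SANs:", cert.IPAddresses)
    }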

TestCertExpiration (280.75s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-340600 --memory=3072 --cert-expiration=3m --driver=docker
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-340600 --memory=3072 --cert-expiration=3m --driver=docker: (48.3625467s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-340600 --memory=3072 --cert-expiration=8760h --driver=docker
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-340600 --memory=3072 --cert-expiration=8760h --driver=docker: (47.2840768s)
helpers_test.go:175: Cleaning up "cert-expiration-340600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-340600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-340600: (5.0994757s)
--- PASS: TestCertExpiration (280.75s)

TestDockerFlags (58.22s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-316500 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
E1109 14:34:46.315191   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-316500 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (53.228717s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-316500 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-316500 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-316500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-316500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-316500: (3.7967063s)
--- PASS: TestDockerFlags (58.22s)
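
Note: the two ssh steps above assert that --docker-env and --docker-opt survived into the docker systemd unit. The Environment check reduces to string matching on systemctl output; a sketch of that parse, meant to run inside the node (e.g. via minikube ssh), with the expected pairs copied from the flags in the log:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("systemctl", "show", "docker",
            "--property=Environment", "--no-pager").Output()
        if err != nil {
            log.Fatal(err)
        }
        // Output looks like: Environment=FOO=BAR BAZ=BAT
        env := strings.TrimPrefix(strings.TrimSpace(string(out)), "Environment=")
        for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
            if !strings.Contains(env, want) {
                log.Fatalf("missing %s in %q", want, env)
            }
        }
        fmt.Println("docker-env flags made it into the unit:", env)
    }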

TestForceSystemdFlag (95.75s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-184300 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-184300 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker: (1m30.9804345s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-184300 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-184300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-184300
E1109 14:29:29.390808   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-184300: (4.0083495s)
--- PASS: TestForceSystemdFlag (95.75s)

                                                
                                    
TestForceSystemdEnv (61.47s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-005900 --memory=3072 --alsologtostderr -v=5 --driver=docker
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-005900 --memory=3072 --alsologtostderr -v=5 --driver=docker: (56.7065393s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-005900 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-005900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-005900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-005900: (4.0921672s)
--- PASS: TestForceSystemdEnv (61.47s)
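
Same assertion as TestForceSystemdFlag, but note the start line carries no --force-systemd flag: the behaviour is presumably driven through the MINIKUBE_FORCE_SYSTEMD environment variable that minikube echoes in its startup banner. A sketch (PowerShell; the variable-based trigger is an assumption, not confirmed by this log):

    $env:MINIKUBE_FORCE_SYSTEMD = "true"   # assumption: env-driven counterpart of --force-systemd
    minikube start -p systemd-env-demo --memory=3072 --driver=docker
    minikube -p systemd-env-demo ssh "docker info --format {{.CgroupDriver}}"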

                                                
                                    
TestErrorSpam/start (2.76s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783100 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-783100 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783100 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-783100 start --dry-run
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-783100 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-783100 start --dry-run: (1.0082586s)
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783100 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-783100 start --dry-run
--- PASS: TestErrorSpam/start (2.76s)

                                                
                                    
TestErrorSpam/status (2.30s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783100 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-783100 status
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783100 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-783100 status
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783100 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-783100 status
--- PASS: TestErrorSpam/status (2.30s)

                                                
                                    
TestErrorSpam/pause (2.63s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783100 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-783100 pause
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-783100 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-783100 pause: (1.1826419s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783100 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-783100 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783100 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-783100 pause
--- PASS: TestErrorSpam/pause (2.63s)

                                                
                                    
TestErrorSpam/unpause (2.60s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783100 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-783100 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783100 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-783100 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783100 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-783100 unpause
--- PASS: TestErrorSpam/unpause (2.60s)

                                                
                                    
TestErrorSpam/stop (19.73s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783100 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-783100 stop
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-783100 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-783100 stop: (11.9928396s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783100 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-783100 stop
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-783100 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-783100 stop: (3.659042s)
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783100 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-783100 stop
error_spam_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-783100 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-783100 stop: (4.0779363s)
--- PASS: TestErrorSpam/stop (19.73s)
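
Stop is deliberately issued three times against the same profile: the first stop does the real work (~12s here) and the repeats must stay quiet and error-free, which is what the spam check is after. A sketch:

    minikube -p nospam-demo stop    # real shutdown
    minikube -p nospam-demo stop    # already stopped; should still exit 0 with no extra stderr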

                                                
                                    
TestFunctional/serial/CopySyncFile (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\10336\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

                                                
                                    
TestFunctional/serial/StartWithProxy (90.52s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-605600 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker
E1109 13:39:46.287804   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:39:46.294356   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:39:46.306833   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:39:46.328481   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:39:46.370385   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:39:46.451985   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:39:46.613941   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:39:46.935998   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:39:47.578075   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:39:48.860438   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:39:51.423851   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:39:56.546676   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:40:06.789901   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:40:27.272275   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-605600 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker: (1m30.5181426s)
--- PASS: TestFunctional/serial/StartWithProxy (90.52s)

                                                
                                    
TestFunctional/serial/AuditLog (0.00s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (53.73s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1109 13:40:37.395601   10336 config.go:182] Loaded profile config "functional-605600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-605600 --alsologtostderr -v=8
E1109 13:41:08.234706   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-605600 --alsologtostderr -v=8: (53.7300126s)
functional_test.go:678: soft start took 53.7308951s for "functional-605600" cluster.
I1109 13:41:31.126297   10336 config.go:182] Loaded profile config "functional-605600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (53.73s)
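
A "soft start" is simply start re-run against a cluster that is already up with an unchanged config; minikube should reattach to the existing node rather than recreate it, which is why this completes in under a minute. A sketch:

    minikube start -p demo --driver=docker            # initial start
    minikube start -p demo --alsologtostderr -v=8     # soft start: reuses the running cluster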

                                                
                                    
TestFunctional/serial/KubeContext (0.10s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.10s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-605600 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (10.16s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-605600 cache add registry.k8s.io/pause:3.1: (3.4776314s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-605600 cache add registry.k8s.io/pause:3.3: (3.166649s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-605600 cache add registry.k8s.io/pause:latest: (3.5197424s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (10.16s)
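
cache add pulls an image on the host and preloads it into the cluster node, so the node never has to reach the registry itself. A sketch:

    minikube -p demo cache add registry.k8s.io/pause:3.1
    minikube -p demo ssh sudo crictl images    # the cached tag should now appear in the node's image list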

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (4.49s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-605600 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local4218094140\001
functional_test.go:1092: (dbg) Done: docker build -t minikube-local-cache-test:functional-605600 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local4218094140\001: (1.5675124s)
functional_test.go:1104: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 cache add minikube-local-cache-test:functional-605600
functional_test.go:1104: (dbg) Done: out/minikube-windows-amd64.exe -p functional-605600 cache add minikube-local-cache-test:functional-605600: (2.6493014s)
functional_test.go:1109: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 cache delete minikube-local-cache-test:functional-605600
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-605600
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (4.49s)
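
The same mechanism works for locally built images: build with docker on the host, then cache the tag into the node by name. A sketch (image name is hypothetical):

    docker build -t local-cache-demo:v1 .
    minikube -p demo cache add local-cache-demo:v1
    minikube -p demo cache delete local-cache-demo:v1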

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.62s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.62s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (4.54s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-605600 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (608.708ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-windows-amd64.exe -p functional-605600 cache reload: (2.7215442s)
functional_test.go:1178: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (4.54s)
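
This is the reload round-trip in full: delete a cached image from inside the node, confirm crictl inspecti now fails, then let cache reload restore everything on the host's cache list. A sketch:

    minikube -p demo ssh sudo docker rmi registry.k8s.io/pause:latest
    minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
    minikube -p demo cache reload
    minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again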

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.39s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.39s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 kubectl -- --context functional-605600 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.49s)
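
minikube kubectl -- forwards everything after the double dash to a kubectl matched to the cluster's Kubernetes version, so no separately installed kubectl is needed. A sketch:

    minikube -p demo kubectl -- get pods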

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.34s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out\kubectl.exe --context functional-605600 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.34s)

                                                
                                    
TestFunctional/serial/ExtraConfig (69.97s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-605600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1109 13:42:30.157524   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-605600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m9.9719392s)
functional_test.go:776: restart took 1m9.9719392s for "functional-605600" cluster.
I1109 13:43:02.931191   10336 config.go:182] Loaded profile config "functional-605600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (69.97s)
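
--extra-config uses the component.flag=value form to pass arbitrary flags to individual control-plane components; here it enables an extra apiserver admission plugin, and --wait=all blocks until every component reports ready, hence the ~70s restart. A sketch:

    minikube start -p demo --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all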

                                                
                                    
TestFunctional/serial/ComponentHealth (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-605600 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.14s)
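
The health probe is a plain label query: every tier=control-plane pod in kube-system must be Running and Ready. A sketch:

    kubectl --context demo get po -l tier=control-plane -n kube-system -o=json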

                                                
                                    
TestFunctional/serial/LogsCmd (1.75s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 logs
functional_test.go:1251: (dbg) Done: out/minikube-windows-amd64.exe -p functional-605600 logs: (1.7515221s)
--- PASS: TestFunctional/serial/LogsCmd (1.75s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.85s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd452214659\001\logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-windows-amd64.exe -p functional-605600 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd452214659\001\logs.txt: (1.8312052s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.85s)

                                                
                                    
TestFunctional/serial/InvalidService (5.15s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-605600 apply -f testdata\invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-605600
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-605600: exit status 115 (1.030383s)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31292 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_service_9c977cb937a5c6299cc91c983e64e702e081bf76_2.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-605600 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (5.15s)
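
minikube service still prints the NodePort URL table for a Service with no live endpoints, but then exits 115 with SVC_UNREACHABLE, which is exactly what this test provokes. A sketch, reusing the repo's testdata manifest:

    kubectl --context demo apply -f testdata/invalidsvc.yaml
    minikube service invalid-svc -p demo    # exit status 115: SVC_UNREACHABLE
    kubectl --context demo delete -f testdata/invalidsvc.yaml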

                                                
                                    
TestFunctional/parallel/ConfigCmd (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-605600 config get cpus: exit status 14 (180.69ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-605600 config get cpus: exit status 14 (151.994ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.22s)
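
config get on a key that has never been set (or was just unset) exits 14 with "specified key could not be found in config"; the subtest cycles set/get/unset to confirm both paths. A sketch:

    minikube -p demo config set cpus 2
    minikube -p demo config get cpus     # prints 2
    minikube -p demo config unset cpus
    minikube -p demo config get cpus     # exit status 14: key not found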

                                                
                                    
TestFunctional/parallel/DryRun (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-605600 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-605600 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (720.9076ms)

                                                
                                                
-- stdout --
	* [functional-605600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:43:16.031852   14236 out.go:360] Setting OutFile to fd 1080 ...
	I1109 13:43:16.088444   14236 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:43:16.088444   14236 out.go:374] Setting ErrFile to fd 1228...
	I1109 13:43:16.088444   14236 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:43:16.103887   14236 out.go:368] Setting JSON to false
	I1109 13:43:16.106065   14236 start.go:133] hostinfo: {"hostname":"minikube4","uptime":1746,"bootTime":1762694050,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6456 Build 19045.6456","kernelVersion":"10.0.19045.6456 Build 19045.6456","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1109 13:43:16.106065   14236 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1109 13:43:16.115427   14236 out.go:179] * [functional-605600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
	I1109 13:43:16.118198   14236 notify.go:221] Checking for updates...
	I1109 13:43:16.120019   14236 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1109 13:43:16.122249   14236 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:43:16.124610   14236 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1109 13:43:16.126247   14236 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:43:16.128239   14236 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:43:16.131803   14236 config.go:182] Loaded profile config "functional-605600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1109 13:43:16.132745   14236 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:43:16.254761   14236 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1109 13:43:16.263766   14236 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:43:16.535765   14236 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:84 SystemTime:2025-11-09 13:43:16.514421505 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1109 13:43:16.538767   14236 out.go:179] * Using the docker driver based on existing profile
	I1109 13:43:16.540762   14236 start.go:309] selected driver: docker
	I1109 13:43:16.540762   14236 start.go:930] validating driver "docker" against &{Name:functional-605600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-605600 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:43:16.541760   14236 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:43:16.624761   14236 out.go:203] 
	W1109 13:43:16.626766   14236 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1109 13:43:16.628768   14236 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-605600 --dry-run --alsologtostderr -v=1 --driver=docker
--- PASS: TestFunctional/parallel/DryRun (1.67s)
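
--dry-run runs the full validation pipeline without touching the cluster, so a 250MB request trips the 1800MB usable minimum and exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while a dry run with workable settings should validate cleanly. A sketch:

    minikube start -p demo --dry-run --memory 250MB --driver=docker   # exit 23: RSRC_INSUFFICIENT_REQ_MEMORY
    minikube start -p demo --dry-run --driver=docker                  # defaults validate, exit 0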

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-605600 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-605600 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (654.6099ms)

                                                
                                                
-- stdout --
	* [functional-605600] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:43:17.708175   13276 out.go:360] Setting OutFile to fd 1564 ...
	I1109 13:43:17.764934   13276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:43:17.764934   13276 out.go:374] Setting ErrFile to fd 1324...
	I1109 13:43:17.764934   13276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:43:17.779927   13276 out.go:368] Setting JSON to false
	I1109 13:43:17.782937   13276 start.go:133] hostinfo: {"hostname":"minikube4","uptime":1747,"bootTime":1762694050,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6456 Build 19045.6456","kernelVersion":"10.0.19045.6456 Build 19045.6456","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1109 13:43:17.782937   13276 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1109 13:43:17.786926   13276 out.go:179] * [functional-605600] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
	I1109 13:43:17.788928   13276 notify.go:221] Checking for updates...
	I1109 13:43:17.790928   13276 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1109 13:43:17.792948   13276 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:43:17.794923   13276 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1109 13:43:17.797939   13276 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:43:17.798948   13276 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:43:17.801935   13276 config.go:182] Loaded profile config "functional-605600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1109 13:43:17.802932   13276 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:43:17.937318   13276 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1109 13:43:17.947466   13276 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:43:18.191872   13276 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:84 SystemTime:2025-11-09 13:43:18.173850398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1109 13:43:18.195877   13276 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1109 13:43:18.197878   13276 start.go:309] selected driver: docker
	I1109 13:43:18.197878   13276 start.go:930] validating driver "docker" against &{Name:functional-605600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-605600 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:43:18.197878   13276 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:43:18.236879   13276 out.go:203] 
	W1109 13:43:18.238878   13276 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1109 13:43:18.240870   13276 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.66s)

                                                
                                    
TestFunctional/parallel/StatusCmd (2.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 status
functional_test.go:875: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (2.19s)
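
status accepts a Go template via -f and structured output via -o json on top of the default text view. A sketch (template trimmed to two of the fields exercised above):

    minikube -p demo status
    minikube -p demo status -f host:{{.Host}},apiserver:{{.APIServer}}
    minikube -p demo status -o json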

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.45s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (63.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [dc9f895a-f871-4627-b00c-6742ad84a6c6] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0064215s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-605600 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-605600 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-605600 get pvc myclaim -o=json
I1109 13:43:24.903427   10336 retry.go:31] will retry after 2.484031335s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:67fd583f-9df1-4860-8219-746d64af41c6 ResourceVersion:814 Generation:0 CreationTimestamp:2025-11-09 13:43:24 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-67fd583f-9df1-4860-8219-746d64af41c6 StorageClassName:0xc001604cf0 VolumeMode:0xc001604d00 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-605600 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-605600 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [9fd15bdb-dc1a-443b-b9b4-3bf082ea0272] Pending
helpers_test.go:352: "sp-pod" [9fd15bdb-dc1a-443b-b9b4-3bf082ea0272] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [9fd15bdb-dc1a-443b-b9b4-3bf082ea0272] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.007062s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-605600 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-605600 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-605600 delete -f testdata/storage-provisioner/pod.yaml: (3.276624s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-605600 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [2a4b3b05-9bf0-4098-b6dc-aa52118b8983] Pending
helpers_test.go:352: "sp-pod" [2a4b3b05-9bf0-4098-b6dc-aa52118b8983] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [2a4b3b05-9bf0-4098-b6dc-aa52118b8983] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 30.0067912s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-605600 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (63.68s)
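
The retry.go line above shows what the test is polling for: the claim is created in phase "Pending" and flips to "Bound" once the storage-provisioner binds a volume to it. A minimal client-go sketch of that wait, assuming the claim/namespace names from the log and a kubeconfig in the default location (this is not the test's own helper):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the kubeconfig the same way `kubectl --context ...` would.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 2s, up to 4m (the test's wait budget), until phase == Bound.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pvc, err := client.CoreV1().PersistentVolumeClaims("default").Get(ctx, "myclaim", metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                return pvc.Status.Phase == corev1.ClaimBound, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("pvc myclaim is Bound")
    }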

TestFunctional/parallel/SSHCmd (1.15s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.15s)

TestFunctional/parallel/CpCmd (3.55s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 ssh -n functional-605600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 cp functional-605600:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd407401758\001\cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 ssh -n functional-605600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 ssh -n functional-605600 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (3.55s)

TestFunctional/parallel/MySQL (55.23s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-605600 replace --force -f testdata\mysql.yaml
functional_test.go:1798: (dbg) Done: kubectl --context functional-605600 replace --force -f testdata\mysql.yaml: (1.6625578s)
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-nm5n7" [b2de9b74-eea0-40b4-a218-07de5938b93a] Pending
helpers_test.go:352: "mysql-5bb876957f-nm5n7" [b2de9b74-eea0-40b4-a218-07de5938b93a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-nm5n7" [b2de9b74-eea0-40b4-a218-07de5938b93a] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 39.0059315s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-605600 exec mysql-5bb876957f-nm5n7 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-605600 exec mysql-5bb876957f-nm5n7 -- mysql -ppassword -e "show databases;": exit status 1 (197.9688ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1109 13:44:20.324159   10336 retry.go:31] will retry after 1.100057959s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-605600 exec mysql-5bb876957f-nm5n7 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-605600 exec mysql-5bb876957f-nm5n7 -- mysql -ppassword -e "show databases;": exit status 1 (196.7351ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1109 13:44:21.627169   10336 retry.go:31] will retry after 1.429874101s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-605600 exec mysql-5bb876957f-nm5n7 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-605600 exec mysql-5bb876957f-nm5n7 -- mysql -ppassword -e "show databases;": exit status 1 (270.6472ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1109 13:44:23.334002   10336 retry.go:31] will retry after 3.363157031s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-605600 exec mysql-5bb876957f-nm5n7 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-605600 exec mysql-5bb876957f-nm5n7 -- mysql -ppassword -e "show databases;": exit status 1 (201.518ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1109 13:44:26.905237   10336 retry.go:31] will retry after 1.892853068s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-605600 exec mysql-5bb876957f-nm5n7 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-605600 exec mysql-5bb876957f-nm5n7 -- mysql -ppassword -e "show databases;": exit status 1 (201.2295ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1109 13:44:29.008640   10336 retry.go:31] will retry after 5.458774719s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-605600 exec mysql-5bb876957f-nm5n7 -- mysql -ppassword -e "show databases;"
E1109 13:44:46.288550   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:45:14.000701   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (55.23s)
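
The five non-zero exits above are the expected warm-up: mysqld inside the pod is first not accepting connections at all (ERROR 2002) and then still initializing credentials (ERROR 1045), so the test re-runs the query with a growing delay until it succeeds. A rough stand-in for that loop, using plain os/exec rather than minikube's retry helper (pod name copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        args := []string{"--context", "functional-605600", "exec", "mysql-5bb876957f-nm5n7", "--",
            "mysql", "-ppassword", "-e", "show databases;"}
        delay := time.Second
        for attempt := 1; attempt <= 10; attempt++ {
            out, err := exec.Command("kubectl", args...).CombinedOutput()
            if err == nil {
                fmt.Printf("%s", out)
                return
            }
            // Mirrors the retry.go lines above: log the failure, back off, try again.
            fmt.Printf("attempt %d failed (%v), retrying after %v\n", attempt, err, delay)
            time.Sleep(delay)
            delay *= 2
        }
    }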

TestFunctional/parallel/FileSync (0.54s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/10336/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 ssh "sudo cat /etc/test/nested/copy/10336/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.54s)

TestFunctional/parallel/CertSync (3.77s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/10336.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 ssh "sudo cat /etc/ssl/certs/10336.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/10336.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 ssh "sudo cat /usr/share/ca-certificates/10336.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/103362.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 ssh "sudo cat /etc/ssl/certs/103362.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/103362.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 ssh "sudo cat /usr/share/ca-certificates/103362.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (3.77s)

TestFunctional/parallel/NodeLabels (0.13s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-605600 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-605600 ssh "sudo systemctl is-active crio": exit status 1 (589.9324ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
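
The non-zero exit is the point of this test: with the docker runtime active, `systemctl is-active crio` prints "inactive" and exits with status 3, and the test treats that failure as confirmation that the other runtime is disabled. A small sketch of reading both the printed state and the exit code, assuming a systemd host:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("systemctl", "is-active", "crio").CombinedOutput()
        state := strings.TrimSpace(string(out))
        if ee, ok := err.(*exec.ExitError); ok {
            // Non-zero exit (3 in the log above) is how systemd reports "not active".
            fmt.Printf("state=%q exit=%d\n", state, ee.ExitCode())
            return
        }
        if err != nil {
            panic(err) // systemctl missing, permission problems, etc.
        }
        fmt.Printf("state=%q exit=0\n", state)
    }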

TestFunctional/parallel/License (1.55s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2293: (dbg) Done: out/minikube-windows-amd64.exe license: (1.537632s)
--- PASS: TestFunctional/parallel/License (1.55s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-605600 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-605600 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-njxxt" [8023b2a4-1f92-4237-8d04-d28d1772ebac] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-njxxt" [8023b2a4-1f92-4237-8d04-d28d1772ebac] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.008421s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.33s)

TestFunctional/parallel/Version/short (0.18s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 version --short
--- PASS: TestFunctional/parallel/Version/short (0.18s)

TestFunctional/parallel/Version/components (1.02s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-windows-amd64.exe -p functional-605600 version -o=json --components: (1.0211809s)
--- PASS: TestFunctional/parallel/Version/components (1.02s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.51s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-605600 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-605600
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-605600
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-605600 image ls --format short --alsologtostderr:
I1109 13:44:22.507283    4820 out.go:360] Setting OutFile to fd 1856 ...
I1109 13:44:22.551277    4820 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:44:22.551277    4820 out.go:374] Setting ErrFile to fd 1612...
I1109 13:44:22.551277    4820 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:44:22.562272    4820 config.go:182] Loaded profile config "functional-605600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1109 13:44:22.563272    4820 config.go:182] Loaded profile config "functional-605600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1109 13:44:22.575281    4820 cli_runner.go:164] Run: docker container inspect functional-605600 --format={{.State.Status}}
I1109 13:44:22.634277    4820 ssh_runner.go:195] Run: systemctl --version
I1109 13:44:22.639279    4820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605600
I1109 13:44:22.693293    4820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65081 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-605600\id_rsa Username:docker}
I1109 13:44:22.845762    4820 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.51s)
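
The stderr trace shows the mechanics of `image ls`: minikube resolves the node's forwarded SSH port via docker container inspect, opens an SSH session, and runs `docker images --no-trunc --format "{{json .}}"`, which emits one JSON object per image. A sketch of consuming that output locally; the field names below are what recent Docker CLIs emit and should be treated as an assumption:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // One JSON object per output line; assumed field names for `docker images`.
    type imageRow struct {
        Repository string `json:"Repository"`
        Tag        string `json:"Tag"`
        ID         string `json:"ID"`
        Size       string `json:"Size"`
    }

    func main() {
        cmd := exec.Command("docker", "images", "--no-trunc", "--format", "{{json .}}")
        pipe, err := cmd.StdoutPipe()
        if err != nil {
            panic(err)
        }
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        sc := bufio.NewScanner(pipe)
        for sc.Scan() {
            var row imageRow
            if err := json.Unmarshal(sc.Bytes(), &row); err != nil {
                continue // skip anything that isn't a JSON image line
            }
            fmt.Printf("%s:%s  %s  %s\n", row.Repository, row.Tag, row.ID, row.Size)
        }
        if err := cmd.Wait(); err != nil {
            panic(err)
        }
    }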

TestFunctional/parallel/ImageCommands/ImageListTable (0.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-605600 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver              │ v1.34.1           │ c3994bc696102 │ 88MB   │
│ registry.k8s.io/kube-proxy                  │ v1.34.1           │ fc25172553d79 │ 71.9MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0           │ 5f1f5298c888d │ 195MB  │
│ docker.io/library/minikube-local-cache-test │ functional-605600 │ d8edf7d18edd8 │ 30B    │
│ docker.io/library/nginx                     │ alpine            │ d4918ca78576a │ 52.8MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ docker.io/library/mysql                     │ 5.7               │ 5107333e08a87 │ 501MB  │
│ docker.io/kicbase/echo-server               │ functional-605600 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1           │ c80c8dbafe7dd │ 74.9MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1           │ 7dd6aaa1717ab │ 52.8MB │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ docker.io/library/nginx                     │ latest            │ d261fd19cb632 │ 152MB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-605600 image ls --format table --alsologtostderr:
I1109 13:44:24.228976   10152 out.go:360] Setting OutFile to fd 1880 ...
I1109 13:44:24.271981   10152 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:44:24.271981   10152 out.go:374] Setting ErrFile to fd 1876...
I1109 13:44:24.271981   10152 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:44:24.286982   10152 config.go:182] Loaded profile config "functional-605600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1109 13:44:24.286982   10152 config.go:182] Loaded profile config "functional-605600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1109 13:44:24.298978   10152 cli_runner.go:164] Run: docker container inspect functional-605600 --format={{.State.Status}}
I1109 13:44:24.365977   10152 ssh_runner.go:195] Run: systemctl --version
I1109 13:44:24.373984   10152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605600
I1109 13:44:24.426981   10152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65081 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-605600\id_rsa Username:docker}
I1109 13:44:24.549735   10152 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.47s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-605600 image ls --format json --alsologtostderr:
[{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"52800000"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"52800000"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"88000000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoD
igests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"d8edf7d18edd861df8b24f0f7d1c649eb067378cf3c51ee3599bf17293592b8f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-605600"],"size":"30"},{"id":"d261fd19cb63238535ab80d4e1be1d9e7f6c8b5a28a820188968dd3e6f06072d","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"152000000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-605600","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"
repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"74900000"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"71900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-605600 image ls --format json --alsologtostderr:
I1109 13:44:23.766150    5720 out.go:360] Setting OutFile to fd 1772 ...
I1109 13:44:23.812156    5720 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:44:23.812156    5720 out.go:374] Setting ErrFile to fd 1280...
I1109 13:44:23.812156    5720 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:44:23.824144    5720 config.go:182] Loaded profile config "functional-605600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1109 13:44:23.824144    5720 config.go:182] Loaded profile config "functional-605600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1109 13:44:23.840158    5720 cli_runner.go:164] Run: docker container inspect functional-605600 --format={{.State.Status}}
I1109 13:44:23.900149    5720 ssh_runner.go:195] Run: systemctl --version
I1109 13:44:23.906149    5720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605600
I1109 13:44:23.956159    5720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65081 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-605600\id_rsa Username:docker}
I1109 13:44:24.082275    5720 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.48s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.74s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-605600 image ls --format yaml --alsologtostderr:
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "52800000"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "74900000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-605600
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "71900000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "88000000"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "52800000"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195000000"
- id: d8edf7d18edd861df8b24f0f7d1c649eb067378cf3c51ee3599bf17293592b8f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-605600
size: "30"
- id: d261fd19cb63238535ab80d4e1be1d9e7f6c8b5a28a820188968dd3e6f06072d
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "152000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-605600 image ls --format yaml --alsologtostderr:
I1109 13:44:23.012816   10828 out.go:360] Setting OutFile to fd 1608 ...
I1109 13:44:23.063353   10828 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:44:23.063353   10828 out.go:374] Setting ErrFile to fd 1632...
I1109 13:44:23.063353   10828 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:44:23.076364   10828 config.go:182] Loaded profile config "functional-605600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1109 13:44:23.077361   10828 config.go:182] Loaded profile config "functional-605600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1109 13:44:23.089353   10828 cli_runner.go:164] Run: docker container inspect functional-605600 --format={{.State.Status}}
I1109 13:44:23.155350   10828 ssh_runner.go:195] Run: systemctl --version
I1109 13:44:23.161363   10828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605600
I1109 13:44:23.220007   10828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65081 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-605600\id_rsa Username:docker}
I1109 13:44:23.389415   10828 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.74s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-605600 ssh pgrep buildkitd: exit status 1 (936.5515ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 image build -t localhost/my-image:functional-605600 testdata\build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-windows-amd64.exe -p functional-605600 image build -t localhost/my-image:functional-605600 testdata\build --alsologtostderr: (3.9374363s)
functional_test.go:338: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-605600 image build -t localhost/my-image:functional-605600 testdata\build --alsologtostderr:
I1109 13:44:24.172982   14040 out.go:360] Setting OutFile to fd 1844 ...
I1109 13:44:24.236973   14040 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:44:24.236973   14040 out.go:374] Setting ErrFile to fd 1848...
I1109 13:44:24.236973   14040 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:44:24.248973   14040 config.go:182] Loaded profile config "functional-605600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1109 13:44:24.269973   14040 config.go:182] Loaded profile config "functional-605600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1109 13:44:24.280980   14040 cli_runner.go:164] Run: docker container inspect functional-605600 --format={{.State.Status}}
I1109 13:44:24.343976   14040 ssh_runner.go:195] Run: systemctl --version
I1109 13:44:24.352975   14040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605600
I1109 13:44:24.412986   14040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65081 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-605600\id_rsa Username:docker}
I1109 13:44:24.533104   14040 build_images.go:162] Building image from path: C:\Users\jenkins.minikube4\AppData\Local\Temp\build.1104498585.tar
I1109 13:44:24.543672   14040 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1109 13:44:24.565981   14040 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1104498585.tar
I1109 13:44:24.574651   14040 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1104498585.tar: stat -c "%s %y" /var/lib/minikube/build/build.1104498585.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1104498585.tar': No such file or directory
I1109 13:44:24.574651   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\AppData\Local\Temp\build.1104498585.tar --> /var/lib/minikube/build/build.1104498585.tar (3072 bytes)
I1109 13:44:24.610308   14040 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1104498585
I1109 13:44:24.630430   14040 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1104498585 -xf /var/lib/minikube/build/build.1104498585.tar
I1109 13:44:24.642435   14040 docker.go:361] Building image: /var/lib/minikube/build/build.1104498585
I1109 13:44:24.648439   14040 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-605600 /var/lib/minikube/build/build.1104498585
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.7s
#6 [2/3] RUN true
#6 DONE 0.5s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:971b6c15bd0247fd8e1d63e15659983d9e6d352ff3dc82ec170975bf0580838d
#8 writing image sha256:971b6c15bd0247fd8e1d63e15659983d9e6d352ff3dc82ec170975bf0580838d done
#8 naming to localhost/my-image:functional-605600 0.0s done
#8 DONE 0.2s
I1109 13:44:27.951371   14040 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-605600 /var/lib/minikube/build/build.1104498585: (3.3029124s)
I1109 13:44:27.958973   14040 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1104498585
I1109 13:44:27.980554   14040 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1104498585.tar
I1109 13:44:27.994643   14040 build_images.go:218] Built localhost/my-image:functional-605600 from C:\Users\jenkins.minikube4\AppData\Local\Temp\build.1104498585.tar
I1109 13:44:27.994759   14040 build_images.go:134] succeeded building to: functional-605600
I1109 13:44:27.994800   14040 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.35s)
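
The build log lays out the round trip: the local testdata\build context is packed into a tar, copied over SSH to /var/lib/minikube/build, extracted, and handed to docker build inside the node. A minimal sketch of the packing step with archive/tar; the paths are illustrative, not the test's temp names:

    package main

    import (
        "archive/tar"
        "io"
        "io/fs"
        "os"
        "path/filepath"
    )

    // packContext tars up a build-context directory, roughly what minikube
    // does before shipping the context into the node.
    func packContext(dir, tarPath string) error {
        f, err := os.Create(tarPath)
        if err != nil {
            return err
        }
        defer f.Close()
        tw := tar.NewWriter(f)
        defer tw.Close()
        return filepath.WalkDir(dir, func(path string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            info, err := d.Info()
            if err != nil {
                return err
            }
            hdr, err := tar.FileInfoHeader(info, "")
            if err != nil {
                return err
            }
            rel, err := filepath.Rel(dir, path)
            if err != nil {
                return err
            }
            hdr.Name = filepath.ToSlash(rel) // tar entries use forward slashes
            if err := tw.WriteHeader(hdr); err != nil {
                return err
            }
            src, err := os.Open(path)
            if err != nil {
                return err
            }
            defer src.Close()
            _, err = io.Copy(tw, src)
            return err
        })
    }

    func main() {
        if err := packContext("testdata/build", "build.tar"); err != nil {
            panic(err)
        }
    }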

TestFunctional/parallel/ImageCommands/Setup (1.69s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.5995342s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-605600
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.69s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 image load --daemon kicbase/echo-server:functional-605600 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-windows-amd64.exe -p functional-605600 image load --daemon kicbase/echo-server:functional-605600 --alsologtostderr: (3.0195939s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.56s)

TestFunctional/parallel/ProfileCmd/profile_not_create (1.08s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (1.08s)

TestFunctional/parallel/ProfileCmd/profile_list (1.06s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1330: Took "833.2511ms" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1344: Took "222.2429ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (1.06s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.98s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1381: Took "795.6122ms" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1394: Took "188.8687ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.98s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.11s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 image load --daemon kicbase/echo-server:functional-605600 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-605600 image load --daemon kicbase/echo-server:functional-605600 --alsologtostderr: (2.6334735s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.11s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.76s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-605600 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-605600 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-605600 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-605600 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 9060: OpenProcess: The parameter is incorrect.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.76s)

TestFunctional/parallel/ServiceCmd/List (0.79s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.79s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-605600 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.41s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-605600 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [b57a1550-db58-4a38-ac4e-836def2d0a64] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [b57a1550-db58-4a38-ac4e-836def2d0a64] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 14.005103s
I1109 13:43:34.825388   10336 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.41s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.69s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 service list -o json
functional_test.go:1504: Took "688.428ms" to run "out/minikube-windows-amd64.exe -p functional-605600 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.69s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-605600 service --namespace=default --https --url hello-node: exit status 1 (15.030343s)

-- stdout --
	https://127.0.0.1:65326

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1532: found endpoint: https://127.0.0.1:65326
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.98s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-605600
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 image load --daemon kicbase/echo-server:functional-605600 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-605600 image load --daemon kicbase/echo-server:functional-605600 --alsologtostderr: (2.7475493s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.98s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.71s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 image save kicbase/echo-server:functional-605600 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.71s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.18s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 image rm kicbase/echo-server:functional-605600 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.18s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.14s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.14s)
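
Note: taken together, ImageSaveToFile, ImageRemove, and ImageLoadFromFile form a save/remove/load round trip: export the image to a tarball, delete it from the node, then restore it from the file. Condensed from the commands in this run (the relative tarball path is an illustrative stand-in for the absolute workspace path the test used):

    out/minikube-windows-amd64.exe -p functional-605600 image save kicbase/echo-server:functional-605600 echo-server-save.tar
    out/minikube-windows-amd64.exe -p functional-605600 image rm kicbase/echo-server:functional-605600
    out/minikube-windows-amd64.exe -p functional-605600 image load echo-server-save.tar
    out/minikube-windows-amd64.exe -p functional-605600 image ls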

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-605600
functional_test.go:439: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 image save --daemon kicbase/echo-server:functional-605600 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-605600
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.91s)

TestFunctional/parallel/DockerEnv/powershell (5.47s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:514: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-605600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-605600"
functional_test.go:514: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-605600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-605600": (3.1962097s)
functional_test.go:537: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-605600 docker-env | Invoke-Expression ; docker images"
functional_test.go:537: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-605600 docker-env | Invoke-Expression ; docker images": (2.2653964s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (5.47s)
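
Note: this test demonstrates the standard PowerShell pattern for pointing the host docker CLI at the daemon inside the minikube node: evaluate the environment assignments that docker-env emits, then run docker in the same shell. Straight from this run:

    out/minikube-windows-amd64.exe -p functional-605600 docker-env | Invoke-Expression
    out/minikube-windows-amd64.exe status -p functional-605600
    docker images    # now lists the images inside the minikube node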

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-605600 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-605600 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 7256: OpenProcess: The parameter is incorrect.
helpers_test.go:525: unable to kill pid 11216: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.33s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.33s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.32s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.32s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)

TestFunctional/parallel/ServiceCmd/Format (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-605600 service hello-node --url --format={{.IP}}: exit status 1 (15.0105285s)
-- stdout --
	127.0.0.1

-- /stdout --
** stderr **
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.01s)

TestFunctional/parallel/ServiceCmd/URL (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-605600 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-605600 service hello-node --url: exit status 1 (15.01036s)
-- stdout --
	http://127.0.0.1:65443

-- /stdout --
** stderr **
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1575: found endpoint for hello-node: http://127.0.0.1:65443
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.01s)

TestFunctional/delete_echo-server_images (0.15s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-605600
--- PASS: TestFunctional/delete_echo-server_images (0.15s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-605600
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-605600
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestMultiControlPlane/serial/StartCluster (246.59s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker
E1109 13:49:46.290528   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe -p ha-767100 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker: (4m4.9517857s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 status --alsologtostderr -v 5
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-767100 status --alsologtostderr -v 5: (1.6411831s)
--- PASS: TestMultiControlPlane/serial/StartCluster (246.59s)
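
Note: the --ha flag is what makes this a multi-control-plane profile; as the status output later in this report shows, the cluster comes up with three control-plane nodes (ha-767100, -m02, -m03). The start and verification commands from this run, trimmed of test-only logging flags:

    out/minikube-windows-amd64.exe -p ha-767100 start --ha --memory 3072 --wait true --driver=docker
    out/minikube-windows-amd64.exe -p ha-767100 status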

TestMultiControlPlane/serial/DeployApp (8.95s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe -p ha-767100 kubectl -- rollout status deployment/busybox: (3.7039724s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 kubectl -- exec busybox-7b57f96db7-7qt5f -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe -p ha-767100 kubectl -- exec busybox-7b57f96db7-7qt5f -- nslookup kubernetes.io: (1.0256864s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 kubectl -- exec busybox-7b57f96db7-jwssj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 kubectl -- exec busybox-7b57f96db7-lrppd -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 kubectl -- exec busybox-7b57f96db7-7qt5f -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 kubectl -- exec busybox-7b57f96db7-jwssj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 kubectl -- exec busybox-7b57f96db7-lrppd -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 kubectl -- exec busybox-7b57f96db7-7qt5f -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 kubectl -- exec busybox-7b57f96db7-jwssj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 kubectl -- exec busybox-7b57f96db7-lrppd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.95s)

TestMultiControlPlane/serial/PingHostFromPods (2.51s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 kubectl -- exec busybox-7b57f96db7-7qt5f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 kubectl -- exec busybox-7b57f96db7-7qt5f -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 kubectl -- exec busybox-7b57f96db7-jwssj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 kubectl -- exec busybox-7b57f96db7-jwssj -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 kubectl -- exec busybox-7b57f96db7-lrppd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 kubectl -- exec busybox-7b57f96db7-lrppd -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (2.51s)
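
Note: the host-reachability check resolves host.minikube.internal inside each busybox pod and pings the result. In the pipeline above, awk 'NR==5' selects the answer line of busybox nslookup output and cut takes the IP field. The same in-pod steps, wrapped in a variable for readability (the HOSTIP name is illustrative):

    # run inside a pod, e.g. via: kubectl exec <pod> -- sh -c '...'
    HOSTIP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
    ping -c 1 "$HOSTIP"    # resolved to 192.168.65.254 in this run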

TestMultiControlPlane/serial/AddWorkerNode (57.62s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 node add --alsologtostderr -v 5
E1109 13:53:12.151115   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:53:12.158394   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:53:12.170465   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:53:12.193364   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:53:12.235102   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:53:12.316646   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:53:12.479012   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:53:12.800994   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:53:13.443984   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:53:14.726191   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:53:17.288511   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:53:22.410478   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:53:32.653390   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:53:53.136143   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe -p ha-767100 node add --alsologtostderr -v 5: (55.6035722s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-767100 status --alsologtostderr -v 5: (2.0197363s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.62s)
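
Note: "node add" without flags attaches a worker (m04 here); the AddSecondaryNode test later in this report passes --control-plane to the same command to grow the control plane instead. Both forms as exercised in this run:

    out/minikube-windows-amd64.exe -p ha-767100 node add                   # joins a new worker node
    out/minikube-windows-amd64.exe -p ha-767100 node add --control-plane   # joins another control-plane node
    out/minikube-windows-amd64.exe -p ha-767100 status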

TestMultiControlPlane/serial/NodeLabels (0.15s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-767100 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.15s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (2.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.0813444s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (2.08s)

TestMultiControlPlane/serial/CopyFile (36.1s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-windows-amd64.exe -p ha-767100 status --output json --alsologtostderr -v 5: (2.0125063s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 cp testdata\cp-test.txt ha-767100:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 cp ha-767100:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile33587488\001\cp-test_ha-767100.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 cp ha-767100:/home/docker/cp-test.txt ha-767100-m02:/home/docker/cp-test_ha-767100_ha-767100-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m02 "sudo cat /home/docker/cp-test_ha-767100_ha-767100-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 cp ha-767100:/home/docker/cp-test.txt ha-767100-m03:/home/docker/cp-test_ha-767100_ha-767100-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m03 "sudo cat /home/docker/cp-test_ha-767100_ha-767100-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 cp ha-767100:/home/docker/cp-test.txt ha-767100-m04:/home/docker/cp-test_ha-767100_ha-767100-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m04 "sudo cat /home/docker/cp-test_ha-767100_ha-767100-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 cp testdata\cp-test.txt ha-767100-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 cp ha-767100-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile33587488\001\cp-test_ha-767100-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 cp ha-767100-m02:/home/docker/cp-test.txt ha-767100:/home/docker/cp-test_ha-767100-m02_ha-767100.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100 "sudo cat /home/docker/cp-test_ha-767100-m02_ha-767100.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 cp ha-767100-m02:/home/docker/cp-test.txt ha-767100-m03:/home/docker/cp-test_ha-767100-m02_ha-767100-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m03 "sudo cat /home/docker/cp-test_ha-767100-m02_ha-767100-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 cp ha-767100-m02:/home/docker/cp-test.txt ha-767100-m04:/home/docker/cp-test_ha-767100-m02_ha-767100-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m04 "sudo cat /home/docker/cp-test_ha-767100-m02_ha-767100-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 cp testdata\cp-test.txt ha-767100-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 cp ha-767100-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile33587488\001\cp-test_ha-767100-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 cp ha-767100-m03:/home/docker/cp-test.txt ha-767100:/home/docker/cp-test_ha-767100-m03_ha-767100.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100 "sudo cat /home/docker/cp-test_ha-767100-m03_ha-767100.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 cp ha-767100-m03:/home/docker/cp-test.txt ha-767100-m02:/home/docker/cp-test_ha-767100-m03_ha-767100-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m02 "sudo cat /home/docker/cp-test_ha-767100-m03_ha-767100-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 cp ha-767100-m03:/home/docker/cp-test.txt ha-767100-m04:/home/docker/cp-test_ha-767100-m03_ha-767100-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m04 "sudo cat /home/docker/cp-test_ha-767100-m03_ha-767100-m04.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m04 "sudo cat /home/docker/cp-test_ha-767100-m03_ha-767100-m04.txt": (1.1482545s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 cp testdata\cp-test.txt ha-767100-m04:/home/docker/cp-test.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-767100 cp testdata\cp-test.txt ha-767100-m04:/home/docker/cp-test.txt: (1.0958633s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 cp ha-767100-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile33587488\001\cp-test_ha-767100-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 cp ha-767100-m04:/home/docker/cp-test.txt ha-767100:/home/docker/cp-test_ha-767100-m04_ha-767100.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100 "sudo cat /home/docker/cp-test_ha-767100-m04_ha-767100.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 cp ha-767100-m04:/home/docker/cp-test.txt ha-767100-m02:/home/docker/cp-test_ha-767100-m04_ha-767100-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m02 "sudo cat /home/docker/cp-test_ha-767100-m04_ha-767100-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 cp ha-767100-m04:/home/docker/cp-test.txt ha-767100-m03:/home/docker/cp-test_ha-767100-m04_ha-767100-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m03 "sudo cat /home/docker/cp-test_ha-767100-m04_ha-767100-m03.txt"
E1109 13:54:34.097880   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/CopyFile (36.10s)
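
Note: the matrix above exercises every direction "minikube cp" supports: host to node, node to host, and node to node, each verified with a sudo cat over ssh. One example per direction, drawn from this run (the relative host-side destination is an illustrative stand-in for the temp path the test used):

    out/minikube-windows-amd64.exe -p ha-767100 cp testdata\cp-test.txt ha-767100:/home/docker/cp-test.txt
    out/minikube-windows-amd64.exe -p ha-767100 cp ha-767100:/home/docker/cp-test.txt .\cp-test_ha-767100.txt
    out/minikube-windows-amd64.exe -p ha-767100 cp ha-767100:/home/docker/cp-test.txt ha-767100-m02:/home/docker/cp-test_ha-767100_ha-767100-m02.txt
    out/minikube-windows-amd64.exe -p ha-767100 ssh -n ha-767100-m02 "sudo cat /home/docker/cp-test_ha-767100_ha-767100-m02.txt"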

TestMultiControlPlane/serial/StopSecondaryNode (13.46s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p ha-767100 node stop m02 --alsologtostderr -v 5: (11.8440558s)
ha_test.go:371: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 status --alsologtostderr -v 5
E1109 13:54:46.292515   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:371: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-767100 status --alsologtostderr -v 5: exit status 7 (1.6199756s)
-- stdout --
	ha-767100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-767100-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-767100-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-767100-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1109 13:54:46.116612    9384 out.go:360] Setting OutFile to fd 1988 ...
	I1109 13:54:46.159562    9384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:54:46.159562    9384 out.go:374] Setting ErrFile to fd 1816...
	I1109 13:54:46.159562    9384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:54:46.170626    9384 out.go:368] Setting JSON to false
	I1109 13:54:46.170626    9384 mustload.go:66] Loading cluster: ha-767100
	I1109 13:54:46.170626    9384 notify.go:221] Checking for updates...
	I1109 13:54:46.171846    9384 config.go:182] Loaded profile config "ha-767100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1109 13:54:46.171923    9384 status.go:174] checking status of ha-767100 ...
	I1109 13:54:46.186864    9384 cli_runner.go:164] Run: docker container inspect ha-767100 --format={{.State.Status}}
	I1109 13:54:46.245386    9384 status.go:371] ha-767100 host status = "Running" (err=<nil>)
	I1109 13:54:46.245386    9384 host.go:66] Checking if "ha-767100" exists ...
	I1109 13:54:46.252025    9384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-767100
	I1109 13:54:46.303504    9384 host.go:66] Checking if "ha-767100" exists ...
	I1109 13:54:46.310504    9384 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:54:46.316505    9384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767100
	I1109 13:54:46.368531    9384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65512 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-767100\id_rsa Username:docker}
	I1109 13:54:46.543363    9384 ssh_runner.go:195] Run: systemctl --version
	I1109 13:54:46.564160    9384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 13:54:46.590573    9384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-767100
	I1109 13:54:46.647180    9384 kubeconfig.go:125] found "ha-767100" server: "https://127.0.0.1:65516"
	I1109 13:54:46.647765    9384 api_server.go:166] Checking apiserver status ...
	I1109 13:54:46.654903    9384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:54:46.684414    9384 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2297/cgroup
	I1109 13:54:46.697279    9384 api_server.go:182] apiserver freezer: "7:freezer:/docker/7b5ece2d932b07624d0feb8efb52247a60f28d6f184547f11ec26e471e117f79/kubepods/burstable/pod0985e9ed9972e427f409a4b6a15bf15d/ab8f2b5d372611e62e4313939e3eed6852926a1b51676e7f6eb7ea226657937f"
	I1109 13:54:46.704930    9384 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7b5ece2d932b07624d0feb8efb52247a60f28d6f184547f11ec26e471e117f79/kubepods/burstable/pod0985e9ed9972e427f409a4b6a15bf15d/ab8f2b5d372611e62e4313939e3eed6852926a1b51676e7f6eb7ea226657937f/freezer.state
	I1109 13:54:46.717414    9384 api_server.go:204] freezer state: "THAWED"
	I1109 13:54:46.717414    9384 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:65516/healthz ...
	I1109 13:54:46.727968    9384 api_server.go:279] https://127.0.0.1:65516/healthz returned 200:
	ok
	I1109 13:54:46.727968    9384 status.go:463] ha-767100 apiserver status = Running (err=<nil>)
	I1109 13:54:46.727968    9384 status.go:176] ha-767100 status: &{Name:ha-767100 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 13:54:46.727968    9384 status.go:174] checking status of ha-767100-m02 ...
	I1109 13:54:46.739835    9384 cli_runner.go:164] Run: docker container inspect ha-767100-m02 --format={{.State.Status}}
	I1109 13:54:46.794100    9384 status.go:371] ha-767100-m02 host status = "Stopped" (err=<nil>)
	I1109 13:54:46.794222    9384 status.go:384] host is not running, skipping remaining checks
	I1109 13:54:46.794222    9384 status.go:176] ha-767100-m02 status: &{Name:ha-767100-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 13:54:46.794250    9384 status.go:174] checking status of ha-767100-m03 ...
	I1109 13:54:46.805780    9384 cli_runner.go:164] Run: docker container inspect ha-767100-m03 --format={{.State.Status}}
	I1109 13:54:46.859836    9384 status.go:371] ha-767100-m03 host status = "Running" (err=<nil>)
	I1109 13:54:46.859836    9384 host.go:66] Checking if "ha-767100-m03" exists ...
	I1109 13:54:46.868395    9384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-767100-m03
	I1109 13:54:46.920869    9384 host.go:66] Checking if "ha-767100-m03" exists ...
	I1109 13:54:46.929171    9384 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:54:46.934154    9384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767100-m03
	I1109 13:54:46.987622    9384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49255 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-767100-m03\id_rsa Username:docker}
	I1109 13:54:47.113674    9384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 13:54:47.138125    9384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-767100
	I1109 13:54:47.192789    9384 kubeconfig.go:125] found "ha-767100" server: "https://127.0.0.1:65516"
	I1109 13:54:47.193355    9384 api_server.go:166] Checking apiserver status ...
	I1109 13:54:47.200918    9384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:54:47.228095    9384 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2219/cgroup
	I1109 13:54:47.242081    9384 api_server.go:182] apiserver freezer: "7:freezer:/docker/ceb60059b845b52413fe5ad33b909bf4176c4ceba284b9ecc20e6aa985e1ebb2/kubepods/burstable/pod8d590eb28ebb48203c5b5f7e14ed63b1/2e74239a32c7c9acfcddc1b0eb86c006c1952b34745e3a01393c9f130c0df9d3"
	I1109 13:54:47.249646    9384 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ceb60059b845b52413fe5ad33b909bf4176c4ceba284b9ecc20e6aa985e1ebb2/kubepods/burstable/pod8d590eb28ebb48203c5b5f7e14ed63b1/2e74239a32c7c9acfcddc1b0eb86c006c1952b34745e3a01393c9f130c0df9d3/freezer.state
	I1109 13:54:47.263832    9384 api_server.go:204] freezer state: "THAWED"
	I1109 13:54:47.263875    9384 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:65516/healthz ...
	I1109 13:54:47.274007    9384 api_server.go:279] https://127.0.0.1:65516/healthz returned 200:
	ok
	I1109 13:54:47.274007    9384 status.go:463] ha-767100-m03 apiserver status = Running (err=<nil>)
	I1109 13:54:47.274007    9384 status.go:176] ha-767100-m03 status: &{Name:ha-767100-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 13:54:47.274007    9384 status.go:174] checking status of ha-767100-m04 ...
	I1109 13:54:47.286071    9384 cli_runner.go:164] Run: docker container inspect ha-767100-m04 --format={{.State.Status}}
	I1109 13:54:47.339884    9384 status.go:371] ha-767100-m04 host status = "Running" (err=<nil>)
	I1109 13:54:47.339884    9384 host.go:66] Checking if "ha-767100-m04" exists ...
	I1109 13:54:47.347201    9384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-767100-m04
	I1109 13:54:47.405861    9384 host.go:66] Checking if "ha-767100-m04" exists ...
	I1109 13:54:47.413490    9384 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:54:47.418544    9384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767100-m04
	I1109 13:54:47.473956    9384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-767100-m04\id_rsa Username:docker}
	I1109 13:54:47.603170    9384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 13:54:47.624287    9384 status.go:176] ha-767100-m04 status: &{Name:ha-767100-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.46s)
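
Note: the exit status 7 above is the condition the test asserts on: "minikube status" exits non-zero when any node or component in the profile is not running, so stopping m02 flips the status call to a failure code even though the other nodes still report Running. A PowerShell check along those lines (the message text is illustrative):

    out/minikube-windows-amd64.exe -p ha-767100 status
    if ($LASTEXITCODE -ne 0) {
        Write-Host "one or more nodes are not fully running (status exit code $LASTEXITCODE)"
    }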

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.63s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.6291611s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.63s)

TestMultiControlPlane/serial/RestartSecondaryNode (104.16s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 node start m02 --alsologtostderr -v 5
E1109 13:55:56.020210   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:56:09.366996   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p ha-767100 node start m02 --alsologtostderr -v 5: (1m41.9929877s)
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-windows-amd64.exe -p ha-767100 status --alsologtostderr -v 5: (2.0275138s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (104.16s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.0876222s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.09s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (200.88s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-windows-amd64.exe -p ha-767100 stop --alsologtostderr -v 5: (38.7592179s)
ha_test.go:469: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 start --wait true --alsologtostderr -v 5
E1109 13:58:12.152851   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:58:39.863925   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:59:46.294629   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-windows-amd64.exe -p ha-767100 start --wait true --alsologtostderr -v 5: (2m41.7933848s)
ha_test.go:474: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (200.88s)

TestMultiControlPlane/serial/DeleteSecondaryNode (14.5s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-windows-amd64.exe -p ha-767100 node delete m03 --alsologtostderr -v 5: (12.5531908s)
ha_test.go:495: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Done: out/minikube-windows-amd64.exe -p ha-767100 status --alsologtostderr -v 5: (1.5423687s)
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (14.50s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.6s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.5940824s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.60s)

TestMultiControlPlane/serial/StopCluster (38.07s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p ha-767100 stop --alsologtostderr -v 5: (37.7074383s)
ha_test.go:539: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-767100 status --alsologtostderr -v 5: exit status 7 (365.3169ms)
-- stdout --
	ha-767100
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-767100-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-767100-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1109 14:00:50.299856    7256 out.go:360] Setting OutFile to fd 1608 ...
	I1109 14:00:50.347224    7256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:00:50.347224    7256 out.go:374] Setting ErrFile to fd 1672...
	I1109 14:00:50.347224    7256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:00:50.358455    7256 out.go:368] Setting JSON to false
	I1109 14:00:50.358455    7256 mustload.go:66] Loading cluster: ha-767100
	I1109 14:00:50.358455    7256 notify.go:221] Checking for updates...
	I1109 14:00:50.359088    7256 config.go:182] Loaded profile config "ha-767100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1109 14:00:50.359088    7256 status.go:174] checking status of ha-767100 ...
	I1109 14:00:50.371872    7256 cli_runner.go:164] Run: docker container inspect ha-767100 --format={{.State.Status}}
	I1109 14:00:50.426634    7256 status.go:371] ha-767100 host status = "Stopped" (err=<nil>)
	I1109 14:00:50.426634    7256 status.go:384] host is not running, skipping remaining checks
	I1109 14:00:50.426634    7256 status.go:176] ha-767100 status: &{Name:ha-767100 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:00:50.426634    7256 status.go:174] checking status of ha-767100-m02 ...
	I1109 14:00:50.440968    7256 cli_runner.go:164] Run: docker container inspect ha-767100-m02 --format={{.State.Status}}
	I1109 14:00:50.492789    7256 status.go:371] ha-767100-m02 host status = "Stopped" (err=<nil>)
	I1109 14:00:50.492789    7256 status.go:384] host is not running, skipping remaining checks
	I1109 14:00:50.492789    7256 status.go:176] ha-767100-m02 status: &{Name:ha-767100-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:00:50.492789    7256 status.go:174] checking status of ha-767100-m04 ...
	I1109 14:00:50.505419    7256 cli_runner.go:164] Run: docker container inspect ha-767100-m04 --format={{.State.Status}}
	I1109 14:00:50.558637    7256 status.go:371] ha-767100-m04 host status = "Stopped" (err=<nil>)
	I1109 14:00:50.558637    7256 status.go:384] host is not running, skipping remaining checks
	I1109 14:00:50.558637    7256 status.go:176] ha-767100-m04 status: &{Name:ha-767100-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (38.07s)

TestMultiControlPlane/serial/RestartCluster (120.96s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 start --wait true --alsologtostderr -v 5 --driver=docker
ha_test.go:562: (dbg) Done: out/minikube-windows-amd64.exe -p ha-767100 start --wait true --alsologtostderr -v 5 --driver=docker: (1m59.0036266s)
ha_test.go:568: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 status --alsologtostderr -v 5
ha_test.go:568: (dbg) Done: out/minikube-windows-amd64.exe -p ha-767100 status --alsologtostderr -v 5: (1.523491s)
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (120.96s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.5559954s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.56s)

TestMultiControlPlane/serial/AddSecondaryNode (103.48s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 node add --control-plane --alsologtostderr -v 5
E1109 14:03:12.155297   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-windows-amd64.exe -p ha-767100 node add --control-plane --alsologtostderr -v 5: (1m41.4867448s)
ha_test.go:613: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-767100 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-windows-amd64.exe -p ha-767100 status --alsologtostderr -v 5: (1.9962558s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (103.48s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.0526581s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.05s)

TestImageBuild/serial/Setup (54.65s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-629000 --driver=docker
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-629000 --driver=docker: (54.651775s)
--- PASS: TestImageBuild/serial/Setup (54.65s)

TestImageBuild/serial/NormalBuild (4.54s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-629000
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-629000: (4.5384684s)
--- PASS: TestImageBuild/serial/NormalBuild (4.54s)

TestImageBuild/serial/BuildWithBuildArg (2.11s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-629000
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-629000: (2.111254s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (2.11s)
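
Note: --build-opt forwards options to the builder inside the node; here it passes a build argument and disables the build cache. The command line is from this run, while the Dockerfile below is a hypothetical sketch of how a test like test-arg could consume the argument, not the repository's actual file:

    out/minikube-windows-amd64.exe -p image-629000 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg

    # hypothetical Dockerfile consuming the build argument
    FROM busybox
    ARG ENV_A
    RUN echo "built with ENV_A=${ENV_A}"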

TestImageBuild/serial/BuildWithDockerIgnore (1.22s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-629000
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-629000: (1.2208985s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.22s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.26s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-629000
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-629000: (1.2589138s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.26s)

TestJSONOutput/start/Command (83.9s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-600300 --output=json --user=testUser --memory=3072 --wait=true --driver=docker
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-600300 --output=json --user=testUser --memory=3072 --wait=true --driver=docker: (1m23.8969713s)
--- PASS: TestJSONOutput/start/Command (83.90s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (1.17s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-600300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-600300 --output=json --user=testUser: (1.1736891s)
--- PASS: TestJSONOutput/pause/Command (1.17s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.9s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-600300 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.90s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.13s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-600300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-600300 --output=json --user=testUser: (12.1264278s)
--- PASS: TestJSONOutput/stop/Command (12.13s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.68s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-326200 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-326200 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (206.7239ms)

-- stdout --
	{"specversion":"1.0","id":"f7b01cd7-c03e-428b-a857-5dec09b1f262","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-326200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"eb9e9bf5-2cb5-4261-9818-020ee6c8d5d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"ef7feb61-b9d3-4f50-ab9c-5eade28a8d4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ae731878-edec-4a8e-a4cf-36102f193927","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"7b14568a-6520-4ab3-84f4-3c0b2ef277a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21139"}}
	{"specversion":"1.0","id":"197e1e71-4ec8-4451-886a-6e61ac58eceb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f807bbd8-e009-470e-bff6-61146bb7932f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-326200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-326200
--- PASS: TestErrorJSONOutput (0.68s)
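Note: the lines in the -- stdout -- block above are CloudEvents-style envelopes, which is what --output=json emits. As a rough illustration (not minikube's own code), a minimal Go sketch that decodes the error event shown above and pulls out the fields the test asserts on:

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors only the envelope fields used below; the full schema is
// visible in the -- stdout -- block above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Error event copied verbatim from the TestErrorJSONOutput stdout above.
	line := `{"specversion":"1.0","id":"f807bbd8-e009-470e-bff6-61146bb7932f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
	}
}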

                                                
                                    
TestKicCustomNetwork/create_custom_network (57.22s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-035000 --network=
E1109 14:08:12.157652   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-035000 --network=: (53.5939787s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-035000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-035000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-035000: (3.5620875s)
--- PASS: TestKicCustomNetwork/create_custom_network (57.22s)

TestKicCustomNetwork/use_default_bridge_network (56.63s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-904100 --network=bridge
E1109 14:09:35.231740   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-904100 --network=bridge: (53.2661875s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-904100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-904100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-904100: (3.2957319s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (56.63s)

TestKicExistingNetwork (57.83s)

=== RUN   TestKicExistingNetwork
I1109 14:09:39.428374   10336 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1109 14:09:39.495820   10336 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1109 14:09:39.502672   10336 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1109 14:09:39.502697   10336 cli_runner.go:164] Run: docker network inspect existing-network
W1109 14:09:39.559250   10336 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1109 14:09:39.559250   10336 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1109 14:09:39.560250   10336 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1109 14:09:39.565247   10336 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1109 14:09:39.632247   10336 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000792cf0}
I1109 14:09:39.633250   10336 network_create.go:124] attempt to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1109 14:09:39.638247   10336 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
W1109 14:09:39.689253   10336 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network returned with exit code 1
W1109 14:09:39.689253   10336 network_create.go:149] failed to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network: exit status 1
stdout:

stderr:
Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
W1109 14:09:39.689253   10336 network_create.go:116] failed to create docker network existing-network 192.168.49.0/24, will retry: subnet is taken
I1109 14:09:39.714264   10336 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1109 14:09:39.729290   10336 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014cb5c0}
I1109 14:09:39.729290   10336 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1109 14:09:39.737489   10336 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1109 14:09:39.880387   10336 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-387600 --network=existing-network
E1109 14:09:46.300239   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-387600 --network=existing-network: (53.989312s)
helpers_test.go:175: Cleaning up "existing-network-387600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-387600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-387600: (3.2450998s)
I1109 14:10:37.195223   10336 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (57.83s)
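Note: the I/W lines above show the free-subnet fallback: the create on 192.168.49.0/24 fails with "Pool overlaps with other one on this address space", the subnet is marked taken, and the next candidate 192.168.58.0/24 succeeds. A simplified Go sketch of that retry loop; the third-octet step of 9 is an assumption read off this run, and it shells out to the docker CLI rather than using minikube's internal network package:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// createNetwork walks candidate /24 subnets until docker accepts one,
// mimicking the 192.168.49.0/24 -> 192.168.58.0/24 fallback logged above.
func createNetwork(name string) (string, error) {
	for octet := 49; octet < 255; octet += 9 { // step of 9 assumed from this run
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
		if err == nil {
			return subnet, nil
		}
		if !strings.Contains(string(out), "Pool overlaps") {
			return "", fmt.Errorf("create %s: %v: %s", subnet, err, out)
		}
		// Subnet taken; try the next candidate, as network_create.go does above.
	}
	return "", fmt.Errorf("no free /24 found")
}

func main() {
	subnet, err := createNetwork("existing-network")
	if err != nil {
		panic(err)
	}
	fmt.Println("created on", subnet)
}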

                                                
                                    
TestKicCustomSubnet (59.71s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-387400 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-387400 --subnet=192.168.60.0/24: (56.0758343s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-387400 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-387400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-387400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-387400: (3.5736654s)
--- PASS: TestKicCustomSubnet (59.71s)
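Note: the subnet assertion above leans on docker's Go-template inspect output. A short sketch of the same check, reusing the exact --format string run at kic_custom_network_test.go:161 and the profile name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same inspect format the test runs above; prints just the first subnet.
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-387400",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}
	got := strings.TrimSpace(string(out))
	if got != "192.168.60.0/24" {
		panic(fmt.Sprintf("subnet = %q, want 192.168.60.0/24", got))
	}
	fmt.Println("subnet verified:", got)
}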

                                                
                                    
TestKicStaticIP (58.72s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p static-ip-786900 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p static-ip-786900 --static-ip=192.168.200.200: (54.7823811s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe -p static-ip-786900 ip
helpers_test.go:175: Cleaning up "static-ip-786900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p static-ip-786900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p static-ip-786900: (3.6161454s)
--- PASS: TestKicStaticIP (58.72s)

TestMainNoArgs (0.16s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.16s)

TestMinikubeProfile (108.68s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-916200 --driver=docker
E1109 14:12:49.378169   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:13:12.160867   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-916200 --driver=docker: (49.6816768s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-916200 --driver=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-916200 --driver=docker: (48.6794091s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-916200
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.2283028s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-916200
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.2172877s)
helpers_test.go:175: Cleaning up "second-916200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-916200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-916200: (3.5940441s)
helpers_test.go:175: Cleaning up "first-916200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-916200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-916200: (3.8153991s)
--- PASS: TestMinikubeProfile (108.68s)

TestMountStart/serial/StartWithMountFirst (14.35s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-213800 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial956597669\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-213800 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial956597669\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (13.3540052s)
--- PASS: TestMountStart/serial/StartWithMountFirst (14.35s)

TestMountStart/serial/VerifyMountFirst (0.59s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-213800 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.59s)

TestMountStart/serial/StartWithMountSecond (14.08s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-213800 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial956597669\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
E1109 14:14:46.302902   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-213800 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial956597669\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (13.0807837s)
--- PASS: TestMountStart/serial/StartWithMountSecond (14.08s)

TestMountStart/serial/VerifyMountSecond (0.56s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-213800 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.56s)

TestMountStart/serial/DeleteFirst (2.44s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-213800 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-213800 --alsologtostderr -v=5: (2.4442922s)
--- PASS: TestMountStart/serial/DeleteFirst (2.44s)

TestMountStart/serial/VerifyMountPostDelete (0.56s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-213800 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.56s)

TestMountStart/serial/Stop (1.91s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-213800
mount_start_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-213800: (1.91004s)
--- PASS: TestMountStart/serial/Stop (1.91s)

TestMountStart/serial/RestartStopped (11.04s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-213800
mount_start_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-213800: (10.0407044s)
--- PASS: TestMountStart/serial/RestartStopped (11.04s)

TestMountStart/serial/VerifyMountPostStop (0.58s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-213800 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.58s)

TestMultiNode/serial/FreshStart2Nodes (131.16s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-359200 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-359200 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker: (2m10.1026704s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-359200 status --alsologtostderr: (1.059962s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (131.16s)

TestMultiNode/serial/DeployApp2Nodes (7.2s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-359200 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-359200 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-359200 -- rollout status deployment/busybox: (3.479s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-359200 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-359200 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-359200 -- exec busybox-7b57f96db7-l6xfr -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-359200 -- exec busybox-7b57f96db7-vgn5v -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-359200 -- exec busybox-7b57f96db7-l6xfr -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-359200 -- exec busybox-7b57f96db7-vgn5v -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-359200 -- exec busybox-7b57f96db7-l6xfr -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-359200 -- exec busybox-7b57f96db7-vgn5v -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.20s)

TestMultiNode/serial/PingHostFrom2Pods (1.76s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-359200 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-359200 -- exec busybox-7b57f96db7-l6xfr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-359200 -- exec busybox-7b57f96db7-l6xfr -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-359200 -- exec busybox-7b57f96db7-vgn5v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-359200 -- exec busybox-7b57f96db7-vgn5v -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.76s)
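Note: the shell pipeline above recovers the host IP by taking line 5 of busybox's nslookup output and its third space-separated field (192.168.65.254 in this run), then pings it. The same awk 'NR==5' | cut -d' ' -f3 step in Go; the sample layout below is an assumption based on busybox nslookup, not captured output:

package main

import (
	"fmt"
	"strings"
)

// hostIP reproduces `awk 'NR==5' | cut -d' ' -f3`: line 5, field 3.
func hostIP(nslookup string) (string, error) {
	lines := strings.Split(nslookup, "\n")
	if len(lines) < 5 {
		return "", fmt.Errorf("unexpected nslookup output: %q", nslookup)
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return "", fmt.Errorf("unexpected line 5: %q", lines[4])
	}
	return fields[2], nil
}

func main() {
	// Assumed busybox layout; the address matches this run.
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.65.254\n"
	ip, err := hostIP(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 192.168.65.254
}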

                                                
                                    
TestMultiNode/serial/AddNode (56.91s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-359200 -v=5 --alsologtostderr
E1109 14:18:12.163761   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-359200 -v=5 --alsologtostderr: (55.5032158s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-359200 status --alsologtostderr: (1.4073647s)
--- PASS: TestMultiNode/serial/AddNode (56.91s)

TestMultiNode/serial/MultiNodeLabels (0.13s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-359200 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.13s)

TestMultiNode/serial/ProfileList (1.44s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.4380452s)
--- PASS: TestMultiNode/serial/ProfileList (1.44s)

TestMultiNode/serial/CopyFile (19.76s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-359200 status --output json --alsologtostderr: (1.3571698s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 cp testdata\cp-test.txt multinode-359200:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 ssh -n multinode-359200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 cp multinode-359200:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile2422142019\001\cp-test_multinode-359200.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 ssh -n multinode-359200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 cp multinode-359200:/home/docker/cp-test.txt multinode-359200-m02:/home/docker/cp-test_multinode-359200_multinode-359200-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 ssh -n multinode-359200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 ssh -n multinode-359200-m02 "sudo cat /home/docker/cp-test_multinode-359200_multinode-359200-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 cp multinode-359200:/home/docker/cp-test.txt multinode-359200-m03:/home/docker/cp-test_multinode-359200_multinode-359200-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 ssh -n multinode-359200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 ssh -n multinode-359200-m03 "sudo cat /home/docker/cp-test_multinode-359200_multinode-359200-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 cp testdata\cp-test.txt multinode-359200-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 ssh -n multinode-359200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 cp multinode-359200-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile2422142019\001\cp-test_multinode-359200-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 ssh -n multinode-359200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 cp multinode-359200-m02:/home/docker/cp-test.txt multinode-359200:/home/docker/cp-test_multinode-359200-m02_multinode-359200.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 ssh -n multinode-359200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 ssh -n multinode-359200 "sudo cat /home/docker/cp-test_multinode-359200-m02_multinode-359200.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 cp multinode-359200-m02:/home/docker/cp-test.txt multinode-359200-m03:/home/docker/cp-test_multinode-359200-m02_multinode-359200-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 ssh -n multinode-359200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 ssh -n multinode-359200-m03 "sudo cat /home/docker/cp-test_multinode-359200-m02_multinode-359200-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 cp testdata\cp-test.txt multinode-359200-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 ssh -n multinode-359200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 cp multinode-359200-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile2422142019\001\cp-test_multinode-359200-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 ssh -n multinode-359200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 cp multinode-359200-m03:/home/docker/cp-test.txt multinode-359200:/home/docker/cp-test_multinode-359200-m03_multinode-359200.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 ssh -n multinode-359200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 ssh -n multinode-359200 "sudo cat /home/docker/cp-test_multinode-359200-m03_multinode-359200.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 cp multinode-359200-m03:/home/docker/cp-test.txt multinode-359200-m02:/home/docker/cp-test_multinode-359200-m03_multinode-359200-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 ssh -n multinode-359200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 ssh -n multinode-359200-m02 "sudo cat /home/docker/cp-test_multinode-359200-m03_multinode-359200-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (19.76s)
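Note: every cp above is immediately verified by cat-ing the file back over ssh. A hedged sketch of one local-to-node round trip using the same binary, profile, and paths as this run (the node-to-node cases differ only in the source spec):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func run(args ...string) []byte {
	out, err := exec.Command("out/minikube-windows-amd64.exe", args...).Output()
	if err != nil {
		panic(fmt.Sprintf("%v: %v", args, err))
	}
	return out
}

func main() {
	// Copy a local file onto the m02 node, then read it back, as above.
	run("-p", "multinode-359200", "cp", "testdata/cp-test.txt",
		"multinode-359200-m02:/home/docker/cp-test.txt")
	got := run("-p", "multinode-359200", "ssh", "-n", "multinode-359200-m02",
		"sudo cat /home/docker/cp-test.txt")
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		panic("copied file does not match source")
	}
	fmt.Println("cp round trip OK")
}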

                                                
                                    
TestMultiNode/serial/StopNode (3.93s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-359200 node stop m03: (1.7148187s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-359200 status: exit status 7 (1.0977284s)

-- stdout --
	multinode-359200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-359200-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-359200-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-359200 status --alsologtostderr: exit status 7 (1.1161552s)

-- stdout --
	multinode-359200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-359200-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-359200-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I1109 14:18:54.990821    6384 out.go:360] Setting OutFile to fd 536 ...
	I1109 14:18:55.035790    6384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:18:55.035790    6384 out.go:374] Setting ErrFile to fd 528...
	I1109 14:18:55.035790    6384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:18:55.053597    6384 out.go:368] Setting JSON to false
	I1109 14:18:55.054202    6384 mustload.go:66] Loading cluster: multinode-359200
	I1109 14:18:55.054202    6384 notify.go:221] Checking for updates...
	I1109 14:18:55.055088    6384 config.go:182] Loaded profile config "multinode-359200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1109 14:18:55.055160    6384 status.go:174] checking status of multinode-359200 ...
	I1109 14:18:55.070439    6384 cli_runner.go:164] Run: docker container inspect multinode-359200 --format={{.State.Status}}
	I1109 14:18:55.131085    6384 status.go:371] multinode-359200 host status = "Running" (err=<nil>)
	I1109 14:18:55.131194    6384 host.go:66] Checking if "multinode-359200" exists ...
	I1109 14:18:55.137261    6384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-359200
	I1109 14:18:55.193991    6384 host.go:66] Checking if "multinode-359200" exists ...
	I1109 14:18:55.201371    6384 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:18:55.206785    6384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359200
	I1109 14:18:55.262816    6384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50627 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-359200\id_rsa Username:docker}
	I1109 14:18:55.401811    6384 ssh_runner.go:195] Run: systemctl --version
	I1109 14:18:55.421400    6384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:18:55.446702    6384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-359200
	I1109 14:18:55.501404    6384 kubeconfig.go:125] found "multinode-359200" server: "https://127.0.0.1:50626"
	I1109 14:18:55.501404    6384 api_server.go:166] Checking apiserver status ...
	I1109 14:18:55.510910    6384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:18:55.537966    6384 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2335/cgroup
	I1109 14:18:55.551262    6384 api_server.go:182] apiserver freezer: "7:freezer:/docker/b06bd26e69c93a38286b0a5f4dc0c4277488b4767c64469129fc4ab8d5d0460c/kubepods/burstable/pod0b3815f9662f7c5f170ff9c725752803/c7d43896e1f3d23989a19f2c6594493dd06af2dba6c842ea2baac4dfec1ff697"
	I1109 14:18:55.559836    6384 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b06bd26e69c93a38286b0a5f4dc0c4277488b4767c64469129fc4ab8d5d0460c/kubepods/burstable/pod0b3815f9662f7c5f170ff9c725752803/c7d43896e1f3d23989a19f2c6594493dd06af2dba6c842ea2baac4dfec1ff697/freezer.state
	I1109 14:18:55.573488    6384 api_server.go:204] freezer state: "THAWED"
	I1109 14:18:55.573488    6384 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50626/healthz ...
	I1109 14:18:55.583729    6384 api_server.go:279] https://127.0.0.1:50626/healthz returned 200:
	ok
	I1109 14:18:55.583729    6384 status.go:463] multinode-359200 apiserver status = Running (err=<nil>)
	I1109 14:18:55.583729    6384 status.go:176] multinode-359200 status: &{Name:multinode-359200 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:18:55.583729    6384 status.go:174] checking status of multinode-359200-m02 ...
	I1109 14:18:55.596844    6384 cli_runner.go:164] Run: docker container inspect multinode-359200-m02 --format={{.State.Status}}
	I1109 14:18:55.649335    6384 status.go:371] multinode-359200-m02 host status = "Running" (err=<nil>)
	I1109 14:18:55.649808    6384 host.go:66] Checking if "multinode-359200-m02" exists ...
	I1109 14:18:55.656828    6384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-359200-m02
	I1109 14:18:55.712592    6384 host.go:66] Checking if "multinode-359200-m02" exists ...
	I1109 14:18:55.720585    6384 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:18:55.725686    6384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359200-m02
	I1109 14:18:55.781157    6384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50675 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-359200-m02\id_rsa Username:docker}
	I1109 14:18:55.913575    6384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:18:55.933210    6384 status.go:176] multinode-359200-m02 status: &{Name:multinode-359200-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:18:55.933210    6384 status.go:174] checking status of multinode-359200-m03 ...
	I1109 14:18:55.945819    6384 cli_runner.go:164] Run: docker container inspect multinode-359200-m03 --format={{.State.Status}}
	I1109 14:18:55.999035    6384 status.go:371] multinode-359200-m03 host status = "Stopped" (err=<nil>)
	I1109 14:18:55.999035    6384 status.go:384] host is not running, skipping remaining checks
	I1109 14:18:55.999035    6384 status.go:176] multinode-359200-m03 status: &{Name:multinode-359200-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.93s)
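Note: the --alsologtostderr trace above shows how status arrives at "apiserver: Running": locate the kube-apiserver process, confirm its freezer cgroup is THAWED, then GET /healthz on the host-forwarded port and expect 200/ok. A sketch of just the healthz step; the port comes from this run, and certificate verification is skipped here purely for illustration (the real check uses minikube's configured transport, not a blind skip):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: do not skip verification outside a sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// 50626 is the forwarded apiserver port in the log above.
	resp, err := client.Get("https://127.0.0.1:50626/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz %d: %s\n", resp.StatusCode, body) // the run above saw 200: ok
}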

                                                
                                    
TestMultiNode/serial/StartAfterStop (13.4s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-359200 node start m03 -v=5 --alsologtostderr: (11.90155s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 status -v=5 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-359200 status -v=5 --alsologtostderr: (1.3629582s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.40s)

TestMultiNode/serial/RestartKeepsNodes (88.31s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-359200
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-359200
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-359200: (24.8127514s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-359200 --wait=true -v=5 --alsologtostderr
E1109 14:19:46.306511   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-359200 --wait=true -v=5 --alsologtostderr: (1m3.1728866s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-359200
--- PASS: TestMultiNode/serial/RestartKeepsNodes (88.31s)

TestMultiNode/serial/DeleteNode (8.15s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-359200 node delete m03: (6.7868498s)
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 status --alsologtostderr
multinode_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-359200 status --alsologtostderr: (1.0390012s)
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (8.15s)

TestMultiNode/serial/StopMultiNode (23.92s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 stop
multinode_test.go:345: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-359200 stop: (23.3342884s)
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-359200 status: exit status 7 (291.2057ms)

                                                
                                                
-- stdout --
	multinode-359200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-359200-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-359200 status --alsologtostderr: exit status 7 (290.1022ms)

                                                
                                                
-- stdout --
	multinode-359200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-359200-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 14:21:09.587839    9084 out.go:360] Setting OutFile to fd 1620 ...
	I1109 14:21:09.634788    9084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:21:09.634788    9084 out.go:374] Setting ErrFile to fd 536...
	I1109 14:21:09.634788    9084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:21:09.644900    9084 out.go:368] Setting JSON to false
	I1109 14:21:09.644900    9084 mustload.go:66] Loading cluster: multinode-359200
	I1109 14:21:09.644900    9084 notify.go:221] Checking for updates...
	I1109 14:21:09.645904    9084 config.go:182] Loaded profile config "multinode-359200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1109 14:21:09.645904    9084 status.go:174] checking status of multinode-359200 ...
	I1109 14:21:09.656900    9084 cli_runner.go:164] Run: docker container inspect multinode-359200 --format={{.State.Status}}
	I1109 14:21:09.712355    9084 status.go:371] multinode-359200 host status = "Stopped" (err=<nil>)
	I1109 14:21:09.712355    9084 status.go:384] host is not running, skipping remaining checks
	I1109 14:21:09.712355    9084 status.go:176] multinode-359200 status: &{Name:multinode-359200 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:21:09.712355    9084 status.go:174] checking status of multinode-359200-m02 ...
	I1109 14:21:09.724978    9084 cli_runner.go:164] Run: docker container inspect multinode-359200-m02 --format={{.State.Status}}
	I1109 14:21:09.774891    9084 status.go:371] multinode-359200-m02 host status = "Stopped" (err=<nil>)
	I1109 14:21:09.774891    9084 status.go:384] host is not running, skipping remaining checks
	I1109 14:21:09.774891    9084 status.go:176] multinode-359200-m02 status: &{Name:multinode-359200-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.92s)
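
Note that both status invocations above pass by returning exit status 7: `minikube status` intentionally exits non-zero when the host is stopped, and the test treats 7 as the expected value. A minimal sketch, assuming only a `minikube` binary on PATH and a hypothetical profile name, of distinguishing that exit code in Go:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// "demo-profile" is a placeholder; the run above used multinode-359200.
	out, err := exec.Command("minikube", "-p", "demo-profile", "status").CombinedOutput()
	fmt.Print(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		// Exit 7 is how a cleanly stopped profile reports itself.
		fmt.Println("profile is stopped, as expected after `minikube stop`")
	} else if err != nil {
		fmt.Println("unexpected status failure:", err)
	}
}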

TestMultiNode/serial/RestartMultiNode (56.04s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-359200 --wait=true -v=5 --alsologtostderr --driver=docker
multinode_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-359200 --wait=true -v=5 --alsologtostderr --driver=docker: (54.5991627s)
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-359200 status --alsologtostderr
multinode_test.go:382: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-359200 status --alsologtostderr: (1.0384023s)
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.04s)

TestMultiNode/serial/ValidateNameConflict (53.81s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-359200
multinode_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-359200-m02 --driver=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-359200-m02 --driver=docker: exit status 14 (210.1326ms)

                                                
                                                
-- stdout --
	* [multinode-359200-m02] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-359200-m02' is duplicated with machine name 'multinode-359200-m02' in profile 'multinode-359200'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-359200-m03 --driver=docker
multinode_test.go:472: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-359200-m03 --driver=docker: (48.8520507s)
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-359200
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-359200: exit status 80 (758.3956ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-359200 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-359200-m03 already exists in multinode-359200-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_node_6ccce2fc44e3bb58d6c4f91e09ae7c7eaaf65535_59.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-359200-m03
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-359200-m03: (3.8313727s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (53.81s)

TestPreload (142.52s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-643800 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.32.0
E1109 14:23:12.166454   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-643800 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.32.0: (1m20.7720975s)
preload_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-643800 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-643800 image pull gcr.io/k8s-minikube/busybox: (2.0938777s)
preload_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-643800
preload_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-643800: (6.7755847s)
preload_test.go:65: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-643800 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker
E1109 14:24:46.308861   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-643800 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker: (48.7401473s)
preload_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-643800 image list
helpers_test.go:175: Cleaning up "test-preload-643800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-643800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-643800: (3.6502217s)
--- PASS: TestPreload (142.52s)
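
The sequence above is the whole preload check: create a cluster with --preload=false, pull an extra image, stop and restart, then list images to confirm the pulled image survived the restart. A sketch of driving that same sequence outside the test harness (the profile name is a placeholder; the flags are the ones shown in the log):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Mirrors the logged steps; "preload-demo" is a placeholder profile.
	steps := [][]string{
		{"start", "-p", "preload-demo", "--memory=3072", "--preload=false", "--driver=docker", "--kubernetes-version=v1.32.0"},
		{"-p", "preload-demo", "image", "pull", "gcr.io/k8s-minikube/busybox"},
		{"stop", "-p", "preload-demo"},
		{"start", "-p", "preload-demo", "--memory=3072", "--driver=docker"},
		{"-p", "preload-demo", "image", "list"}, // busybox should still be listed
	}
	for _, args := range steps {
		cmd := exec.Command("minikube", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("minikube %v failed: %v", args, err)
		}
	}
}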

TestScheduledStopWindows (116.52s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-802400 --memory=3072 --driver=docker
E1109 14:26:15.243977   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-802400 --memory=3072 --driver=docker: (50.1407851s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-802400 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-802400 -n scheduled-stop-802400
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-802400 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-802400 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-802400 --schedule 5s: (1.0374623s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-802400
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-802400: exit status 7 (238.2532ms)

                                                
                                                
-- stdout --
	scheduled-stop-802400
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-802400 -n scheduled-stop-802400
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-802400 -n scheduled-stop-802400: exit status 7 (221.1861ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-802400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-802400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-802400: (2.5112983s)
--- PASS: TestScheduledStopWindows (116.52s)
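
The scheduled-stop flow above arms a stop timer (`--schedule 5m`, later re-armed to 5s) and then expects status to eventually report Stopped with exit code 7. A small polling sketch of the same idea, with a placeholder profile name:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	profile := "sched-demo" // placeholder
	// Arm a stop five seconds out, like the test's second --schedule call.
	if err := exec.Command("minikube", "stop", "-p", profile, "--schedule", "5s").Run(); err != nil {
		fmt.Println("scheduling failed:", err)
		return
	}
	// Poll the host state; Output still returns stdout on exit status 7.
	for i := 0; i < 30; i++ {
		out, _ := exec.Command("minikube", "status", "--format={{.Host}}", "-p", profile).Output()
		if strings.TrimSpace(string(out)) == "Stopped" {
			fmt.Println("scheduled stop completed")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for scheduled stop")
}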

TestInsufficientStorage (31s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-541000 --memory=3072 --output=json --wait=true --driver=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-541000 --memory=3072 --output=json --wait=true --driver=docker: exit status 26 (27.0909022s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e28b022a-e67c-4b02-814b-c6147deedc55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-541000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"135f4d34-6b41-4fab-9783-e3dfe1dba874","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"d2589812-d001-4d94-98ed-a4f7c0f1f9b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"89dd03a9-91a2-42cb-b5ba-e31f9192f2a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"f48c5cb2-c24b-4f0e-b358-fc021fe7ca2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21139"}}
	{"specversion":"1.0","id":"7a7ff576-aab1-4b1f-ae54-b8e98323f7b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"27a372e1-7cdf-45bb-8ab2-a243f0558f9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e32576f7-5542-4910-844d-ab95ddbb1f58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"81c72e11-5f74-430c-a0e5-929201860744","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"886cca88-9f94-4691-b961-9a0e68f56230","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"80a45fba-4710-4844-8e72-626349339ae8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-541000\" primary control-plane node in \"insufficient-storage-541000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b6c142ce-f293-4183-b35d-beaa0a58be30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1761985721-21837 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"124ee666-55c7-4f00-a96a-1331f3e4e4a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"b6f675f3-7e37-4de4-80be-9ebf943fe296","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-541000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-541000 --output=json --layout=cluster: exit status 7 (624.0538ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-541000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-541000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 14:27:52.890735    9208 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-541000" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-541000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-541000 --output=json --layout=cluster: exit status 7 (596.1232ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-541000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-541000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 14:27:53.490294   11040 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-541000" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	E1109 14:27:53.516000   11040 status.go:258] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\insufficient-storage-541000\events.json: The system cannot find the file specified.

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-541000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-541000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-541000: (2.6867351s)
--- PASS: TestInsufficientStorage (31.00s)
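
With --output=json, minikube emits one CloudEvents-style JSON object per line, and the storage failure surfaces as an io.k8s.sigs.minikube.error event carrying exitcode 26 and name RSRC_DOCKER_STORAGE. A sketch of consuming such a stream (piped in on stdin), modelling only the fields visible above, where every data value is a string:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// One line of `minikube start --output=json`, as shown in the report above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20) // error events can be long lines
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // tolerate non-JSON noise
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Piping the start command's stdout through a filter like this would surface the RSRC_DOCKER_STORAGE event seen above as a single line.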

TestRunningBinaryUpgrade (90.4s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.32.0.1113544586.exe start -p running-upgrade-969900 --memory=3072 --vm-driver=docker
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.32.0.1113544586.exe start -p running-upgrade-969900 --memory=3072 --vm-driver=docker: (53.8354468s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-969900 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-969900 --memory=3072 --alsologtostderr -v=1 --driver=docker: (31.9241723s)
helpers_test.go:175: Cleaning up "running-upgrade-969900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-969900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-969900: (3.8078002s)
--- PASS: TestRunningBinaryUpgrade (90.40s)

TestKubernetesUpgrade (436.73s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-426600 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-426600 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker: (1m1.6290333s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-426600
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-426600: (17.8428311s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-426600 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-426600 status --format={{.Host}}: exit status 7 (216.9297ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-426600 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-426600 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker: (5m0.6459964s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-426600 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-426600 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-426600 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker: exit status 106 (199.0086ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-426600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-426600
	    minikube start -p kubernetes-upgrade-426600 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4266002 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-426600 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-426600 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-426600 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker: (51.8083729s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-426600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-426600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-426600: (4.2688109s)
--- PASS: TestKubernetesUpgrade (436.73s)
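
The downgrade attempt in the middle of this test is expected to fail fast: exit code 106 with K8S_DOWNGRADE_UNSUPPORTED on stderr, exactly as captured above. A sketch of asserting that pair from Go (placeholder profile name; the versions are the ones from the log):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	var stderr bytes.Buffer
	cmd := exec.Command("minikube", "start", "-p", "upgrade-demo", // placeholder
		"--memory=3072", "--kubernetes-version=v1.28.0", "--driver=docker")
	cmd.Stderr = &stderr
	err := cmd.Run()
	// A refused downgrade exits 106 and names the reason on stderr.
	if err != nil && cmd.ProcessState != nil && cmd.ProcessState.ExitCode() == 106 &&
		strings.Contains(stderr.String(), "K8S_DOWNGRADE_UNSUPPORTED") {
		fmt.Println("downgrade refused, as expected")
		return
	}
	fmt.Println("unexpected result:", err)
}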

TestNoKubernetes/serial/StartNoK8sWithVersion (0.25s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-184300 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-184300 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker: exit status 14 (247.4668ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-184300] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.25s)

TestNoKubernetes/serial/StartWithK8s (96.11s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-184300 --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:120: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-184300 --memory=3072 --alsologtostderr -v=5 --driver=docker: (1m35.3567326s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-184300 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.11s)

TestNoKubernetes/serial/StartWithStopK8s (25.22s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-184300 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-184300 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker: (21.3141262s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-184300 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-184300 status -o json: exit status 2 (751.5813ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-184300","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-184300
no_kubernetes_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-184300: (3.1468189s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (25.22s)

TestStoppedBinaryUpgrade/Setup (0.94s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.94s)

TestStoppedBinaryUpgrade/Upgrade (97.78s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.32.0.2421050099.exe start -p stopped-upgrade-592800 --memory=3072 --vm-driver=docker
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.32.0.2421050099.exe start -p stopped-upgrade-592800 --memory=3072 --vm-driver=docker: (51.1773521s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.32.0.2421050099.exe -p stopped-upgrade-592800 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.32.0.2421050099.exe -p stopped-upgrade-592800 stop: (8.3590539s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-592800 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-592800 --memory=3072 --alsologtostderr -v=1 --driver=docker: (38.2419833s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (97.78s)

TestNoKubernetes/serial/Start (54.81s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-184300 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:161: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-184300 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker: (54.8142232s)
--- PASS: TestNoKubernetes/serial/Start (54.81s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.69s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-184300 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-184300 "sudo systemctl is-active --quiet service kubelet": exit status 1 (691.1673ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.69s)
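
The "Process exited with status 3" in the stderr above is the interesting part: systemctl is-active exits non-zero for an inactive unit (3 is its usual inactive code, a systemd convention), and `minikube ssh` relays that exit status back, so a non-zero exit here is the pass condition. A sketch of the same probe (placeholder profile name; the remote command is verbatim from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "ssh", "-p", "nok8s-demo", // placeholder
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		// Non-zero exit relayed from systemctl: kubelet is not active.
		fmt.Println("kubelet inactive, as expected with --no-kubernetes:", err)
		return
	}
	fmt.Println("kubelet is active")
}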

TestNoKubernetes/serial/ProfileList (4.97s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-windows-amd64.exe profile list: (2.8829081s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (2.0899549s)
--- PASS: TestNoKubernetes/serial/ProfileList (4.97s)

TestNoKubernetes/serial/Stop (2.05s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-184300
no_kubernetes_test.go:183: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-184300: (2.0461351s)
--- PASS: TestNoKubernetes/serial/Stop (2.05s)

TestNoKubernetes/serial/StartNoArgs (22.24s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-184300 --driver=docker
no_kubernetes_test.go:216: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-184300 --driver=docker: (22.2375119s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (22.24s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.62s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-184300 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-184300 "sudo systemctl is-active --quiet service kubelet": exit status 1 (620.9918ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.62s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.75s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-592800
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-592800: (1.7481391s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.75s)

TestPause/serial/Start (91.35s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-900100 --memory=3072 --install-addons=false --wait=all --driver=docker
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-900100 --memory=3072 --install-addons=false --wait=all --driver=docker: (1m31.3486969s)
--- PASS: TestPause/serial/Start (91.35s)

TestPause/serial/SecondStartNoReconfiguration (58.61s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-900100 --alsologtostderr -v=1 --driver=docker
E1109 14:33:12.173545   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-900100 --alsologtostderr -v=1 --driver=docker: (58.5925915s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (58.61s)

TestPause/serial/Pause (1.12s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-900100 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-900100 --alsologtostderr -v=5: (1.1152329s)
--- PASS: TestPause/serial/Pause (1.12s)

TestPause/serial/VerifyStatus (0.68s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-900100 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-900100 --output=json --layout=cluster: exit status 2 (676.306ms)

                                                
                                                
-- stdout --
	{"Name":"pause-900100","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-900100","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.68s)
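
The --layout=cluster output uses HTTP-flavoured status codes per component: 418 for Paused, 405 for Stopped, 200 for OK, and the status command itself exits 2 for a paused cluster, which is why the non-zero exit above still passes. A sketch decoding just the fields visible in the report (placeholder profile name):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Models only the fields shown in the --layout=cluster output above.
type clusterState struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	// Exit status 2 is expected while paused; stdout still carries the JSON.
	out, _ := exec.Command("minikube", "status", "-p", "pause-demo", // placeholder
		"--output=json", "--layout=cluster").Output()
	var st clusterState
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, n := range st.Nodes {
		fmt.Printf("  node %s: %s\n", n.Name, n.StatusName)
	}
}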

TestPause/serial/Unpause (0.91s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-900100 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.91s)

TestPause/serial/PauseAgain (1.3s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-900100 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-900100 --alsologtostderr -v=5: (1.3028699s)
--- PASS: TestPause/serial/PauseAgain (1.30s)

TestPause/serial/DeletePaused (4.07s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-900100 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-900100 --alsologtostderr -v=5: (4.0650684s)
--- PASS: TestPause/serial/DeletePaused (4.07s)

TestPause/serial/VerifyDeletedResources (1.86s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.6619238s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-900100
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-900100: exit status 1 (55.2ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-900100: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (1.86s)
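
Here the failure is the success condition: after `minikube delete`, `docker volume inspect` on the profile name should error with "no such volume", as it does above. A sketch of that inverted check (placeholder volume name):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "pause-demo" is a placeholder; the run above inspected pause-900100.
	out, err := exec.Command("docker", "volume", "inspect", "pause-demo").CombinedOutput()
	if err != nil {
		// Inspect failing means the volume is gone, i.e. cleanup worked.
		fmt.Println("volume absent, as expected:", string(out))
		return
	}
	fmt.Println("volume still present; delete left residue:", string(out))
}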

TestStartStop/group/old-k8s-version/serial/FirstStart (74.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-571100 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-571100 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0: (1m14.7128649s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (74.71s)

TestStartStop/group/no-preload/serial/FirstStart (102.54s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-249900 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-249900 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.34.1: (1m42.5367915s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (102.54s)

TestStartStop/group/embed-certs/serial/FirstStart (87.01s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-231800 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-231800 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.1: (1m27.0075367s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (87.01s)

TestStartStop/group/old-k8s-version/serial/DeployApp (13.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-571100 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8cd38e69-a04c-4cf9-a0e4-23e3eb58fc97] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8cd38e69-a04c-4cf9-a0e4-23e3eb58fc97] Running
E1109 14:38:12.177229   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 13.0074694s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-571100 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (13.72s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (5.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-571100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-571100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (5.4142976s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-571100 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (5.63s)

TestStartStop/group/old-k8s-version/serial/Stop (12.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-571100 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-571100 --alsologtostderr -v=3: (12.3276255s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.33s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-571100 -n old-k8s-version-571100
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-571100 -n old-k8s-version-571100: exit status 7 (232.2684ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-571100 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.57s)

TestStartStop/group/old-k8s-version/serial/SecondStart (52.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-571100 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-571100 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0: (51.7791009s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-571100 -n old-k8s-version-571100
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (52.47s)

TestStartStop/group/embed-certs/serial/DeployApp (8.66s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-231800 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f6071d24-fc6e-4345-8c47-289dda12c729] Pending
helpers_test.go:352: "busybox" [f6071d24-fc6e-4345-8c47-289dda12c729] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f6071d24-fc6e-4345-8c47-289dda12c729] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.0099397s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-231800 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.66s)

TestStartStop/group/no-preload/serial/DeployApp (10.68s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-249900 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [349e51f6-a952-4d6b-ad4c-36b7b9f23ee5] Pending
helpers_test.go:352: "busybox" [349e51f6-a952-4d6b-ad4c-36b7b9f23ee5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [349e51f6-a952-4d6b-ad4c-36b7b9f23ee5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.0138872s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-249900 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.68s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.9s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-231800 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-231800 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.6465827s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-231800 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.90s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.5s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-231800 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-231800 --alsologtostderr -v=3: (12.5055232s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.50s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.76s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-249900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-249900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.5403107s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-249900 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.76s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.27s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-249900 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-249900 --alsologtostderr -v=3: (12.2697534s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.55s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-231800 -n embed-certs-231800
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-231800 -n embed-certs-231800: exit status 7 (237.2372ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-231800 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.55s)
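
Note: minikube status encodes component state as bit flags in its exit code, so exit status 7 (host, control plane, and kubelet all stopped) is the expected result for a stopped profile; the test therefore treats the non-zero exit as acceptable and enables the addon anyway. A sketch of inspecting the code by hand, assuming a cmd.exe shell:

    out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-231800
    echo %ERRORLEVEL%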

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (54.38s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-231800 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-231800 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.1: (53.6081189s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-231800 -n embed-certs-231800
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (54.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-bblxx" [c18ded8a-5b76-4c3e-9340-a05370e8a739] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0062709s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-bblxx" [c18ded8a-5b76-4c3e-9340-a05370e8a739] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0905238s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-571100 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.78s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.55s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-249900 -n no-preload-249900
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-249900 -n no-preload-249900: exit status 7 (222.158ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-249900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.55s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (67.68s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-249900 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-249900 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.34.1: (1m6.9751583s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-249900 -n no-preload-249900
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (67.68s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-571100 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.74s)
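
Note: VerifyKubernetesImages lists every image loaded into the profile and reports any that are not stock minikube/Kubernetes images (here the busybox test image). A sketch of the same listing in human-readable form, assuming this minikube build supports the table formatter:

    out/minikube-windows-amd64.exe -p old-k8s-version-571100 image list --format=table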

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (5.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-571100 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-571100 --alsologtostderr -v=1: (1.3153431s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-571100 -n old-k8s-version-571100
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-571100 -n old-k8s-version-571100: exit status 2 (678.5966ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-571100 -n old-k8s-version-571100
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-571100 -n old-k8s-version-571100: exit status 2 (675.9637ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-571100 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-571100 --alsologtostderr -v=1: (1.1456166s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-571100 -n old-k8s-version-571100
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-571100 -n old-k8s-version-571100: (1.0791368s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-571100 -n old-k8s-version-571100
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (5.61s)
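
Note: pausing freezes the control plane, so status then reports APIServer=Paused and Kubelet=Stopped with exit status 2, which the test accepts before unpausing. A sketch of the same cycle by hand:

    out/minikube-windows-amd64.exe pause -p old-k8s-version-571100
    out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-571100
    out/minikube-windows-amd64.exe unpause -p old-k8s-version-571100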

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-220000 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-220000 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.1: (1m25.0284497s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-t4ksm" [30291091-76fc-4549-a011-581051ad9320] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-t4ksm" [30291091-76fc-4549-a011-581051ad9320] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.0064864s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.29s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-t4ksm" [30291091-76fc-4549-a011-581051ad9320] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0055415s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-231800 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.54s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-231800 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.54s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (5.22s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-231800 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-231800 --alsologtostderr -v=1: (1.24168s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-231800 -n embed-certs-231800
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-231800 -n embed-certs-231800: exit status 2 (679.3683ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-231800 -n embed-certs-231800
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-231800 -n embed-certs-231800: exit status 2 (672.8976ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-231800 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-231800 --alsologtostderr -v=1: (1.0176601s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-231800 -n embed-certs-231800
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-231800 -n embed-certs-231800
--- PASS: TestStartStop/group/embed-certs/serial/Pause (5.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nznkx" [fca1e56b-5540-4a23-adc1-5a9b5413f916] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0048627s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (58.41s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-180800 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-180800 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.34.1: (58.4134411s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (58.41s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.33s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nznkx" [fca1e56b-5540-4a23-adc1-5a9b5413f916] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0066294s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-249900 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.45s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-249900 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.45s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (10.53s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-249900 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-249900 --alsologtostderr -v=1: (6.0757146s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-249900 -n no-preload-249900
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-249900 -n no-preload-249900: exit status 2 (663.3662ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-249900 -n no-preload-249900
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-249900 -n no-preload-249900: exit status 2 (656.0271ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-249900 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-249900 --alsologtostderr -v=1: (1.4195849s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-249900 -n no-preload-249900
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-249900 -n no-preload-249900
--- PASS: TestStartStop/group/no-preload/serial/Pause (10.53s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (82.19s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-643800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-643800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker: (1m22.1862622s)
--- PASS: TestNetworkPlugins/group/auto/Start (82.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-220000 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8b4310d5-3f05-4d6c-921b-4a38ca58db68] Pending
helpers_test.go:352: "busybox" [8b4310d5-3f05-4d6c-921b-4a38ca58db68] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8b4310d5-3f05-4d6c-921b-4a38ca58db68] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 13.0061983s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-220000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.71s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.72s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-220000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-220000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.4987898s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-220000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.72s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-220000 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-220000 --alsologtostderr -v=3: (12.3320409s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.33s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.63s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000: exit status 7 (251.268ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-220000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.63s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-220000 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-220000 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.1: (49.4185685s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000
I1109 14:42:34.762408   10336 config.go:182] Loaded profile config "auto-643800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.14s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.16s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-180800 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-180800 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.1612321s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.16s)
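
Note: with --network-plugin=cni and no CNI manifest applied, workloads cannot schedule, so the newest-cni variants skip their pod-based assertions (the WARNING above). Completing the setup would mean applying a CNI manifest by hand; purely as an illustration, using the flannel manifest this suite ships for the custom-flannel tests:

    kubectl --context newest-cni-180800 apply -f testdata\kube-flannel.yaml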

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (12.41s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-180800 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-180800 --alsologtostderr -v=3: (12.406873s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.41s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.66s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-180800 -n newest-cni-180800
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-180800 -n newest-cni-180800: exit status 7 (283.5042ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-180800 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.66s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (26.76s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-180800 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-180800 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.34.1: (26.049969s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-180800 -n newest-cni-180800
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.76s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.5s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-180800 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.50s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (5.49s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-180800 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-180800 --alsologtostderr -v=1: (1.5369336s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-180800 -n newest-cni-180800
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-180800 -n newest-cni-180800: exit status 2 (667.6299ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-180800 -n newest-cni-180800
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-180800 -n newest-cni-180800: exit status 2 (646.5204ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-180800 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-180800 --alsologtostderr -v=1: (1.0095524s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-180800 -n newest-cni-180800
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-180800 -n newest-cni-180800
--- PASS: TestStartStop/group/newest-cni/serial/Pause (5.49s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.65s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-643800 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.65s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (15.53s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-643800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-smv94" [9dd59f04-0b01-413b-acf1-bbab45695873] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-smv94" [9dd59f04-0b01-413b-acf1-bbab45695873] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 15.006246s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (15.53s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pl5cz" [70df42bc-0391-469d-a7b2-64f6f054aed4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0065764s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (87.56s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-643800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-643800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker: (1m27.5542502s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (87.56s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pl5cz" [70df42bc-0391-469d-a7b2-64f6f054aed4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0066327s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-220000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-220000 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (9.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-220000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-220000 --alsologtostderr -v=1: (4.4209981s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000: exit status 2 (756.5621ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000: exit status 2 (634.2696ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-220000 --alsologtostderr -v=1
E1109 14:42:55.258163   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-220000 --alsologtostderr -v=1: (2.3011315s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000: (1.0786898s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-220000 -n default-k8s-diff-port-220000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (9.97s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-643800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-643800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-643800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)
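
Note: HairPin checks that a pod can reach itself through its own Service name ("netcat" on port 8080), i.e. that hairpin NAT works under the default CNI. A sketch of confirming the Service the probe targets, assuming it lives in the default namespace:

    kubectl --context auto-643800 get service netcat -n default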

                                                
                                    
TestNetworkPlugins/group/calico/Start (117.74s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-643800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker
E1109 14:43:03.822776   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-571100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:43:03.830769   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-571100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:43:03.843767   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-571100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:43:03.865774   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-571100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:43:03.907637   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-571100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:43:03.989449   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-571100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:43:04.151826   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-571100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:43:04.474896   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-571100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:43:05.116796   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-571100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:43:06.399230   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-571100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:43:08.961657   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-571100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:43:12.179890   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-605600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:43:14.083818   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-571100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-643800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker: (1m57.7444588s)
--- PASS: TestNetworkPlugins/group/calico/Start (117.74s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (68.51s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-643800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker
E1109 14:43:44.809038   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-571100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-643800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker: (1m8.5047906s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (68.51s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-cgvtz" [f6372baf-8351-4e33-9175-233d96a9f5d0] Running
E1109 14:44:11.700764   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-249900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:44:11.707703   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-249900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:44:11.719514   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-249900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:44:11.741897   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-249900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:44:11.784504   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-249900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:44:11.866530   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-249900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0066347s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.6s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-643800 "pgrep -a kubelet"
E1109 14:44:12.029321   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-249900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:44:12.351197   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-249900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
I1109 14:44:12.577277   10336 config.go:182] Loaded profile config "kindnet-643800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.60s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (17.56s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-643800 replace --force -f testdata\netcat-deployment.yaml
E1109 14:44:12.993450   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-249900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-q8l95" [dfb4adac-e7cb-4668-9edc-298d9eb8a13c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1109 14:44:14.275833   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-249900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:44:16.838211   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-249900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:44:21.960917   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-249900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-q8l95" [dfb4adac-e7cb-4668-9edc-298d9eb8a13c] Running
E1109 14:44:25.771532   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-571100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 17.0068273s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (17.56s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-643800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-643800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-643800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.27s)
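
Note: the kindnet DNS, Localhost, and HairPin subtests above all reduce to kubectl exec against the netcat deployment, so they can be replayed by hand. A sketch against this run's kindnet-643800 profile (nc -z only probes whether the port accepts a connection, -w 5 caps the wait at 5 seconds, -i 5 spaces the probes 5 seconds apart):

kubectl --context kindnet-643800 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context kindnet-643800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
kubectl --context kindnet-643800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

The last probe is the hairpin case: the pod dials its own service name ("netcat"), so the connection must loop back to the pod through the service, which misconfigured CNI setups commonly fail.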

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.55s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-643800 "pgrep -a kubelet"
I1109 14:44:34.150799   10336 config.go:182] Loaded profile config "custom-flannel-643800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.55s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (15.6s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-643800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vcppk" [31ba5aca-96f1-4a65-81e8-616afbd7bfdc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vcppk" [31ba5aca-96f1-4a65-81e8-616afbd7bfdc] Running
E1109 14:44:46.322159   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-181600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 15.0064203s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (15.60s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-643800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-643800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-643800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-z6qbj" [a99f2c5e-7512-4873-ac00-cb13133bb0ad] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.0101555s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.6s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-643800 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.60s)

                                                
                                    
TestNetworkPlugins/group/false/Start (94.92s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-643800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker
I1109 14:45:07.301713   10336 config.go:182] Loaded profile config "calico-643800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-643800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker: (1m34.9200913s)
--- PASS: TestNetworkPlugins/group/false/Start (94.92s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (25.67s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-643800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xjcr6" [10bcbd7f-03ee-4187-8d1e-d3c21f415ddf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xjcr6" [10bcbd7f-03ee-4187-8d1e-d3c21f415ddf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 25.0069891s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (25.67s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (104.68s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-643800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-643800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker: (1m44.6787145s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (104.68s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-643800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-643800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-643800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (84.17s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-643800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-643800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker: (1m24.1702301s)
--- PASS: TestNetworkPlugins/group/flannel/Start (84.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (88.44s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-643800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker
E1109 14:46:17.248396   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-220000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:46:17.255107   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-220000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:46:17.266610   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-220000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:46:17.288525   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-220000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:46:17.330556   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-220000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:46:17.412808   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-220000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:46:17.574330   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-220000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:46:17.896161   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-220000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:46:18.538360   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-220000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:46:19.820399   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-220000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:46:22.382326   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-220000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:46:27.505045   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-220000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:46:37.746414   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-220000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-643800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker: (1m28.4398188s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.44s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.67s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-643800 "pgrep -a kubelet"
I1109 14:46:42.479834   10336 config.go:182] Loaded profile config "false-643800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.67s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (15.6s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-643800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pfx2x" [7bc2f100-5dc8-4c66-accb-32fd73488504] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pfx2x" [7bc2f100-5dc8-4c66-accb-32fd73488504] Running
E1109 14:46:55.571531   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-249900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 15.0073164s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (15.60s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-643800 exec deployment/netcat -- nslookup kubernetes.default
E1109 14:46:58.229370   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-220000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/false/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-643800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-643800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.64s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-643800 "pgrep -a kubelet"
I1109 14:47:14.187720   10336 config.go:182] Loaded profile config "enable-default-cni-643800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.64s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (16.76s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-643800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ch277" [ca48fe90-cd2e-4f46-af5d-dab60bed5fff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ch277" [ca48fe90-cd2e-4f46-af5d-dab60bed5fff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 16.0080097s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (16.76s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-643800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-643800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-643800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (100.44s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-643800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker
E1109 14:47:35.268603   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-643800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:47:35.275436   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-643800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:47:35.287696   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-643800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:47:35.309028   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-643800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:47:35.352014   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-643800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:47:35.433976   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-643800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:47:35.596783   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-643800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:47:35.918522   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-643800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:47:36.561817   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-643800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-643800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker: (1m40.4443263s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (100.44s)
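
Note: unlike the CNI groups above, kubenet is selected through the legacy --network-plugin flag rather than a --cni value; the Start variants in this section otherwise differ only in that one flag. A hedged sketch of the three flag shapes this run used (profile names follow the run's *-643800 convention):

minikube start -p flannel-643800 --memory=3072 --cni=flannel --driver=docker
minikube start -p bridge-643800 --memory=3072 --cni=bridge --driver=docker
minikube start -p kubenet-643800 --memory=3072 --network-plugin=kubenet --driver=docker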

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-wxcqq" [7f51f756-99c4-4a05-b27b-beec39e25d84] Running
E1109 14:47:37.844164   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-643800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:47:39.192847   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-220000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:47:40.406320   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-643800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0057293s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.6s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-643800 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.60s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.6s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-643800 "pgrep -a kubelet"
I1109 14:47:43.245790   10336 config.go:182] Loaded profile config "flannel-643800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.60s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (26.82s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-643800 replace --force -f testdata\netcat-deployment.yaml
I1109 14:47:43.826865   10336 config.go:182] Loaded profile config "bridge-643800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ck6q6" [0fd8a762-ce18-44ae-845d-d7b5b8822d98] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ck6q6" [0fd8a762-ce18-44ae-845d-d7b5b8822d98] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 26.0074642s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (26.82s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (25.2s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-643800 replace --force -f testdata\netcat-deployment.yaml
I1109 14:47:44.022874   10336 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1109 14:47:44.039877   10336 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:149: (dbg) Done: kubectl --context bridge-643800 replace --force -f testdata\netcat-deployment.yaml: (1.1888227s)
E1109 14:47:45.528601   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-643800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
I1109 14:47:45.694846   10336 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1109 14:47:46.336715   10336 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ckk2h" [4f7db1ee-c378-4d18-815a-3738a9648e50] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1109 14:47:55.771331   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-643800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-ckk2h" [4f7db1ee-c378-4d18-815a-3738a9648e50] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 22.009053s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (25.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-643800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-643800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-643800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-643800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-643800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-643800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.56s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-643800 "pgrep -a kubelet"
I1109 14:49:14.911152   10336 config.go:182] Loaded profile config "kubenet-643800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.56s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (14.53s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-643800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hzq5g" [900c162d-a1b0-4ec5-9280-93a279b36ed2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1109 14:49:16.231677   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-643800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-hzq5g" [900c162d-a1b0-4ec5-9280-93a279b36ed2] Running
E1109 14:49:26.474594   10336 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-643800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 14.0071328s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (14.53s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-643800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-643800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-643800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.21s)

                                                
                                    

Test skip (27/345)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Registry (28.03s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 9.2067ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-x42m6" [0828bf02-a001-446a-bb6c-ecea816f182e] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.0053255s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-xnbxm" [1f2d5183-d277-470a-9bfb-75cdb03a2bf8] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.055647s
addons_test.go:392: (dbg) Run:  kubectl --context addons-181600 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-181600 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-181600 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (14.5115764s)
addons_test.go:407: Unable to complete rest of the test due to connectivity assumptions
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-181600 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-181600 addons disable registry --alsologtostderr -v=1: (1.2796791s)
--- SKIP: TestAddons/parallel/Registry (28.03s)
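
Note: before skipping, the Registry test still validated the addon from inside the cluster: it ran a throwaway busybox pod that probes the registry's in-cluster DNS name, and the skip at addons_test.go:407 only abandons the later host-side connectivity steps. A sketch of that in-cluster probe, assuming this run's addons-181600 profile (wget --spider checks the URL without downloading the body; -S prints the server headers):

kubectl --context addons-181600 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"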

                                                
                                    
TestAddons/parallel/Ingress (26.75s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-181600 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-181600 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-181600 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [2dc5d9ff-439b-4c5d-b851-9ef4ea911146] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [2dc5d9ff-439b-4c5d-b851-9ef4ea911146] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.0708752s
I1109 13:36:10.926313   10336 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-181600 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: skipping ingress DNS test for any combination that needs port forwarding
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-181600 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-181600 addons disable ingress-dns --alsologtostderr -v=1: (2.9211319s)
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-181600 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-181600 addons disable ingress --alsologtostderr -v=1: (8.9177614s)
--- SKIP: TestAddons/parallel/Ingress (26.75s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (300.04s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-605600 --alsologtostderr -v=1]
functional_test.go:931: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-605600 --alsologtostderr -v=1] ...
helpers_test.go:519: unable to terminate pid 1596: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.04s)

                                                
                                    
TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (15.3s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-605600 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-605600 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-2n67n" [41f66999-4d55-4b2e-837b-8377205f8cee] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-2n67n" [41f66999-4d55-4b2e-837b-8377205f8cee] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 15.005595s
functional_test.go:1651: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (15.30s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.54s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-734500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-734500
--- SKIP: TestStartStop/group/disable-driver-mounts (0.54s)

TestNetworkPlugins/group/cilium (11.31s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-643800 [pass: true] --------------------------------
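
Note: every probe below fails for the same underlying reason: the cilium-643800 profile was never started, since the test was skipped before cluster creation. The kubectl-based probes report "Error in configuration: context was not found" or "error: context ... does not exist", while the minikube-ssh-based host probes report that the profile is missing. A sketch of confirming this state by hand (assuming the same Windows binary used elsewhere in this report):

  out/minikube-windows-amd64.exe profile list                     # cilium-643800 is absent
  out/minikube-windows-amd64.exe ssh -p cilium-643800 -- uname -a # fails: profile not found
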
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-643800

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-643800

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-643800

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-643800

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-643800

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-643800

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-643800

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-643800

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-643800

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-643800

>>> host: /etc/nsswitch.conf:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: /etc/hosts:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: /etc/resolv.conf:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-643800

>>> host: crictl pods:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: crictl containers:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> k8s: describe netcat deployment:
error: context "cilium-643800" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-643800" does not exist

>>> k8s: netcat logs:
error: context "cilium-643800" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-643800" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-643800" does not exist

>>> k8s: coredns logs:
error: context "cilium-643800" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-643800" does not exist

>>> k8s: api server logs:
error: context "cilium-643800" does not exist

>>> host: /etc/cni:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: ip a s:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: ip r s:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: iptables-save:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: iptables table nat:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-643800

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-643800

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-643800" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-643800" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-643800

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-643800

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-643800" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-643800" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-643800" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-643800" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-643800" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: kubelet daemon config:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> k8s: kubelet logs:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt
    extensions:
    - extension:
        last-update: Sun, 09 Nov 2025 14:33:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://127.0.0.1:52180
  name: cert-expiration-340600
- cluster:
    certificate-authority: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt
    extensions:
    - extension:
        last-update: Sun, 09 Nov 2025 14:35:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://127.0.0.1:51758
  name: kubernetes-upgrade-426600
- cluster:
    certificate-authority: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt
    extensions:
    - extension:
        last-update: Sun, 09 Nov 2025 14:31:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://127.0.0.1:51763
  name: missing-upgrade-184300
contexts:
- context:
    cluster: cert-expiration-340600
    extensions:
    - extension:
        last-update: Sun, 09 Nov 2025 14:33:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-340600
  name: cert-expiration-340600
- context:
    cluster: kubernetes-upgrade-426600
    extensions:
    - extension:
        last-update: Sun, 09 Nov 2025 14:35:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-426600
  name: kubernetes-upgrade-426600
- context:
    cluster: missing-upgrade-184300
    user: missing-upgrade-184300
  name: missing-upgrade-184300
current-context: ""
kind: Config
users:
- name: cert-expiration-340600
  user:
    client-certificate: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-340600\client.crt
    client-key: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-340600\client.key
- name: kubernetes-upgrade-426600
  user:
    client-certificate: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-426600\client.crt
    client-key: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-426600\client.key
- name: missing-upgrade-184300
  user:
    client-certificate: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\missing-upgrade-184300\client.crt
    client-key: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\missing-upgrade-184300\client.key
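
Note: this kubeconfig dump is consistent with all the context errors in this section: it holds exactly three contexts (cert-expiration-340600, kubernetes-upgrade-426600, missing-upgrade-184300), no cilium-643800 entry, and an empty current-context. A minimal way to verify by hand (assuming kubectl is on PATH):

  kubectl config get-contexts                    # lists only the three profiles above
  kubectl --context cilium-643800 get pods -A    # reproduces: context "cilium-643800" does not exist
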
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-643800

>>> host: docker daemon status:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: docker daemon config:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: docker system info:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: cri-docker daemon status:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: cri-docker daemon config:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: cri-dockerd version:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: containerd daemon status:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: containerd daemon config:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: containerd config dump:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: crio daemon status:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: crio daemon config:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: /etc/crio:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

>>> host: crio config:
* Profile "cilium-643800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643800"

----------------------- debugLogs end: cilium-643800 [took: 10.7972619s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-643800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-643800
--- SKIP: TestNetworkPlugins/group/cilium (11.31s)