Test Report: Docker_macOS 18757

76fd79497ca7607997860d279d48d970ddc3ee52:2024-04-25:34200

Tests failed (22/208)

TestOffline (759.24s)
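
To reproduce locally, a sketch assuming a minikube source checkout with the darwin/amd64 binary built at out/minikube-darwin-amd64 and Docker Desktop running (the start command is copied verbatim from the log below; the profile name is taken from this run and is otherwise arbitrary):

	# repro sketch, not part of the recorded log
	out/minikube-darwin-amd64 start -p offline-docker-438000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker
	# remove the test profile afterwards
	out/minikube-darwin-amd64 delete -p offline-docker-438000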

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-438000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-438000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m38.346309644s)

-- stdout --
	* [offline-docker-438000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18757
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "offline-docker-438000" primary control-plane node in "offline-docker-438000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-438000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0425 12:58:57.723739   22682 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:58:57.724414   22682 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:58:57.724453   22682 out.go:304] Setting ErrFile to fd 2...
	I0425 12:58:57.724463   22682 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:58:57.725054   22682 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 12:58:57.726675   22682 out.go:298] Setting JSON to false
	I0425 12:58:57.750151   22682 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":12508,"bootTime":1714062629,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0425 12:58:57.750256   22682 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0425 12:58:57.771893   22682 out.go:177] * [offline-docker-438000] minikube v1.33.0 on Darwin 14.4.1
	I0425 12:58:57.813360   22682 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 12:58:57.813384   22682 notify.go:220] Checking for updates...
	I0425 12:58:57.855492   22682 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
	I0425 12:58:57.876381   22682 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0425 12:58:57.897496   22682 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 12:58:57.918538   22682 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube
	I0425 12:58:57.939307   22682 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 12:58:57.960796   22682 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 12:58:58.014492   22682 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0425 12:58:58.014668   22682 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0425 12:58:58.192103   22682 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:101 OomKillDisable:false NGoroutines:185 SystemTime:2024-04-25 19:58:58.148895355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211088384 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0425 12:58:58.212819   22682 out.go:177] * Using the docker driver based on user configuration
	I0425 12:58:58.234045   22682 start.go:297] selected driver: docker
	I0425 12:58:58.234076   22682 start.go:901] validating driver "docker" against <nil>
	I0425 12:58:58.234089   22682 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 12:58:58.237216   22682 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0425 12:58:58.351375   22682 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:101 OomKillDisable:false NGoroutines:185 SystemTime:2024-04-25 19:58:58.340455241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211088384 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0425 12:58:58.351559   22682 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0425 12:58:58.351752   22682 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 12:58:58.372785   22682 out.go:177] * Using Docker Desktop driver with root privileges
	I0425 12:58:58.393968   22682 cni.go:84] Creating CNI manager for ""
	I0425 12:58:58.393991   22682 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0425 12:58:58.393998   22682 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0425 12:58:58.394064   22682 start.go:340] cluster config:
	{Name:offline-docker-438000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-438000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 12:58:58.415062   22682 out.go:177] * Starting "offline-docker-438000" primary control-plane node in "offline-docker-438000" cluster
	I0425 12:58:58.457078   22682 cache.go:121] Beginning downloading kic base image for docker with docker
	I0425 12:58:58.499179   22682 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0425 12:58:58.562049   22682 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 12:58:58.562113   22682 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0425 12:58:58.562120   22682 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0425 12:58:58.562159   22682 cache.go:56] Caching tarball of preloaded images
	I0425 12:58:58.562388   22682 preload.go:173] Found /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0425 12:58:58.562421   22682 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0425 12:58:58.563664   22682 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/offline-docker-438000/config.json ...
	I0425 12:58:58.563763   22682 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/offline-docker-438000/config.json: {Name:mk9b4c31a8b1ce0dc5a97874dfd36c895d156802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 12:58:58.611665   22682 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0425 12:58:58.611698   22682 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0425 12:58:58.611736   22682 cache.go:194] Successfully downloaded all kic artifacts
	I0425 12:58:58.611784   22682 start.go:360] acquireMachinesLock for offline-docker-438000: {Name:mk09fe0300014736e4aeaf98aca392c71589e827 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 12:58:58.611949   22682 start.go:364] duration metric: took 150.404µs to acquireMachinesLock for "offline-docker-438000"
	I0425 12:58:58.611978   22682 start.go:93] Provisioning new machine with config: &{Name:offline-docker-438000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-438000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0425 12:58:58.612047   22682 start.go:125] createHost starting for "" (driver="docker")
	I0425 12:58:58.633066   22682 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0425 12:58:58.633283   22682 start.go:159] libmachine.API.Create for "offline-docker-438000" (driver="docker")
	I0425 12:58:58.633310   22682 client.go:168] LocalClient.Create starting
	I0425 12:58:58.633418   22682 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/ca.pem
	I0425 12:58:58.633465   22682 main.go:141] libmachine: Decoding PEM data...
	I0425 12:58:58.633498   22682 main.go:141] libmachine: Parsing certificate...
	I0425 12:58:58.633575   22682 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/cert.pem
	I0425 12:58:58.633613   22682 main.go:141] libmachine: Decoding PEM data...
	I0425 12:58:58.633620   22682 main.go:141] libmachine: Parsing certificate...
	I0425 12:58:58.654269   22682 cli_runner.go:164] Run: docker network inspect offline-docker-438000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0425 12:58:58.770482   22682 cli_runner.go:211] docker network inspect offline-docker-438000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0425 12:58:58.770635   22682 network_create.go:281] running [docker network inspect offline-docker-438000] to gather additional debugging logs...
	I0425 12:58:58.770666   22682 cli_runner.go:164] Run: docker network inspect offline-docker-438000
	W0425 12:58:58.821796   22682 cli_runner.go:211] docker network inspect offline-docker-438000 returned with exit code 1
	I0425 12:58:58.821825   22682 network_create.go:284] error running [docker network inspect offline-docker-438000]: docker network inspect offline-docker-438000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-438000 not found
	I0425 12:58:58.821837   22682 network_create.go:286] output of [docker network inspect offline-docker-438000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-438000 not found
	
	** /stderr **
	I0425 12:58:58.821985   22682 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0425 12:58:58.922988   22682 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 12:58:58.924574   22682 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 12:58:58.924904   22682 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022aa230}
	I0425 12:58:58.924921   22682 network_create.go:124] attempt to create docker network offline-docker-438000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0425 12:58:58.924987   22682 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-438000 offline-docker-438000
	I0425 12:58:59.011394   22682 network_create.go:108] docker network offline-docker-438000 192.168.67.0/24 created
	I0425 12:58:59.011436   22682 kic.go:121] calculated static IP "192.168.67.2" for the "offline-docker-438000" container
	I0425 12:58:59.011579   22682 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0425 12:58:59.062259   22682 cli_runner.go:164] Run: docker volume create offline-docker-438000 --label name.minikube.sigs.k8s.io=offline-docker-438000 --label created_by.minikube.sigs.k8s.io=true
	I0425 12:58:59.113438   22682 oci.go:103] Successfully created a docker volume offline-docker-438000
	I0425 12:58:59.113550   22682 cli_runner.go:164] Run: docker run --rm --name offline-docker-438000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-438000 --entrypoint /usr/bin/test -v offline-docker-438000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0425 12:58:59.593267   22682 oci.go:107] Successfully prepared a docker volume offline-docker-438000
	I0425 12:58:59.593308   22682 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 12:58:59.593320   22682 kic.go:194] Starting extracting preloaded images to volume ...
	I0425 12:58:59.593441   22682 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-438000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0425 13:04:58.634424   22682 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 13:04:58.634560   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:04:58.686769   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	I0425 13:04:58.686899   22682 retry.go:31] will retry after 231.201191ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:04:58.920493   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:04:58.971514   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	I0425 13:04:58.971611   22682 retry.go:31] will retry after 488.870337ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:04:59.462051   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:04:59.515761   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	I0425 13:04:59.515870   22682 retry.go:31] will retry after 566.563174ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:00.083865   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:05:00.135389   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	W0425 13:05:00.135499   22682 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	
	W0425 13:05:00.135518   22682 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:00.135572   22682 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0425 13:05:00.135625   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:05:00.183405   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	I0425 13:05:00.183498   22682 retry.go:31] will retry after 153.176058ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:00.339156   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:05:00.389420   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	I0425 13:05:00.389512   22682 retry.go:31] will retry after 315.906903ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:00.707358   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:05:00.758292   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	I0425 13:05:00.758395   22682 retry.go:31] will retry after 497.430551ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:01.258256   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:05:01.309761   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	I0425 13:05:01.309867   22682 retry.go:31] will retry after 685.876695ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:01.998143   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:05:02.050691   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	W0425 13:05:02.050802   22682 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	
	W0425 13:05:02.050821   22682 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:02.050830   22682 start.go:128] duration metric: took 6m3.438265379s to createHost
	I0425 13:05:02.050837   22682 start.go:83] releasing machines lock for "offline-docker-438000", held for 6m3.438372107s
	W0425 13:05:02.050852   22682 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0425 13:05:02.051304   22682 cli_runner.go:164] Run: docker container inspect offline-docker-438000 --format={{.State.Status}}
	W0425 13:05:02.098620   22682 cli_runner.go:211] docker container inspect offline-docker-438000 --format={{.State.Status}} returned with exit code 1
	I0425 13:05:02.098682   22682 delete.go:82] Unable to get host status for offline-docker-438000, assuming it has already been deleted: state: unknown state "offline-docker-438000": docker container inspect offline-docker-438000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	W0425 13:05:02.098772   22682 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0425 13:05:02.098782   22682 start.go:728] Will try again in 5 seconds ...
	I0425 13:05:07.100984   22682 start.go:360] acquireMachinesLock for offline-docker-438000: {Name:mk09fe0300014736e4aeaf98aca392c71589e827 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 13:05:07.101925   22682 start.go:364] duration metric: took 815.639µs to acquireMachinesLock for "offline-docker-438000"
	I0425 13:05:07.102015   22682 start.go:96] Skipping create...Using existing machine configuration
	I0425 13:05:07.102034   22682 fix.go:54] fixHost starting: 
	I0425 13:05:07.102560   22682 cli_runner.go:164] Run: docker container inspect offline-docker-438000 --format={{.State.Status}}
	W0425 13:05:07.154099   22682 cli_runner.go:211] docker container inspect offline-docker-438000 --format={{.State.Status}} returned with exit code 1
	I0425 13:05:07.154147   22682 fix.go:112] recreateIfNeeded on offline-docker-438000: state= err=unknown state "offline-docker-438000": docker container inspect offline-docker-438000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:07.154165   22682 fix.go:117] machineExists: false. err=machine does not exist
	I0425 13:05:07.175818   22682 out.go:177] * docker "offline-docker-438000" container is missing, will recreate.
	I0425 13:05:07.217472   22682 delete.go:124] DEMOLISHING offline-docker-438000 ...
	I0425 13:05:07.217620   22682 cli_runner.go:164] Run: docker container inspect offline-docker-438000 --format={{.State.Status}}
	W0425 13:05:07.266473   22682 cli_runner.go:211] docker container inspect offline-docker-438000 --format={{.State.Status}} returned with exit code 1
	W0425 13:05:07.266535   22682 stop.go:83] unable to get state: unknown state "offline-docker-438000": docker container inspect offline-docker-438000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:07.266555   22682 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-438000": docker container inspect offline-docker-438000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:07.266934   22682 cli_runner.go:164] Run: docker container inspect offline-docker-438000 --format={{.State.Status}}
	W0425 13:05:07.314629   22682 cli_runner.go:211] docker container inspect offline-docker-438000 --format={{.State.Status}} returned with exit code 1
	I0425 13:05:07.314697   22682 delete.go:82] Unable to get host status for offline-docker-438000, assuming it has already been deleted: state: unknown state "offline-docker-438000": docker container inspect offline-docker-438000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:07.314784   22682 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-438000
	W0425 13:05:07.362877   22682 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-438000 returned with exit code 1
	I0425 13:05:07.362912   22682 kic.go:371] could not find the container offline-docker-438000 to remove it. will try anyways
	I0425 13:05:07.362982   22682 cli_runner.go:164] Run: docker container inspect offline-docker-438000 --format={{.State.Status}}
	W0425 13:05:07.410747   22682 cli_runner.go:211] docker container inspect offline-docker-438000 --format={{.State.Status}} returned with exit code 1
	W0425 13:05:07.410801   22682 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-438000": docker container inspect offline-docker-438000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:07.410889   22682 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-438000 /bin/bash -c "sudo init 0"
	W0425 13:05:07.459038   22682 cli_runner.go:211] docker exec --privileged -t offline-docker-438000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0425 13:05:07.459071   22682 oci.go:650] error shutdown offline-docker-438000: docker exec --privileged -t offline-docker-438000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:08.460167   22682 cli_runner.go:164] Run: docker container inspect offline-docker-438000 --format={{.State.Status}}
	W0425 13:05:08.511925   22682 cli_runner.go:211] docker container inspect offline-docker-438000 --format={{.State.Status}} returned with exit code 1
	I0425 13:05:08.511985   22682 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-438000": docker container inspect offline-docker-438000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:08.512000   22682 oci.go:664] temporary error: container offline-docker-438000 status is  but expect it to be exited
	I0425 13:05:08.512025   22682 retry.go:31] will retry after 557.683275ms: couldn't verify container is exited. %v: unknown state "offline-docker-438000": docker container inspect offline-docker-438000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:09.071403   22682 cli_runner.go:164] Run: docker container inspect offline-docker-438000 --format={{.State.Status}}
	W0425 13:05:09.124670   22682 cli_runner.go:211] docker container inspect offline-docker-438000 --format={{.State.Status}} returned with exit code 1
	I0425 13:05:09.124722   22682 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-438000": docker container inspect offline-docker-438000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:09.124736   22682 oci.go:664] temporary error: container offline-docker-438000 status is  but expect it to be exited
	I0425 13:05:09.124770   22682 retry.go:31] will retry after 1.035174455s: couldn't verify container is exited. %v: unknown state "offline-docker-438000": docker container inspect offline-docker-438000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:10.160203   22682 cli_runner.go:164] Run: docker container inspect offline-docker-438000 --format={{.State.Status}}
	W0425 13:05:10.252882   22682 cli_runner.go:211] docker container inspect offline-docker-438000 --format={{.State.Status}} returned with exit code 1
	I0425 13:05:10.252923   22682 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-438000": docker container inspect offline-docker-438000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:10.252932   22682 oci.go:664] temporary error: container offline-docker-438000 status is  but expect it to be exited
	I0425 13:05:10.252957   22682 retry.go:31] will retry after 935.846932ms: couldn't verify container is exited. %v: unknown state "offline-docker-438000": docker container inspect offline-docker-438000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:11.189963   22682 cli_runner.go:164] Run: docker container inspect offline-docker-438000 --format={{.State.Status}}
	W0425 13:05:11.243569   22682 cli_runner.go:211] docker container inspect offline-docker-438000 --format={{.State.Status}} returned with exit code 1
	I0425 13:05:11.243621   22682 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-438000": docker container inspect offline-docker-438000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:11.243631   22682 oci.go:664] temporary error: container offline-docker-438000 status is  but expect it to be exited
	I0425 13:05:11.243655   22682 retry.go:31] will retry after 1.708796488s: couldn't verify container is exited. %v: unknown state "offline-docker-438000": docker container inspect offline-docker-438000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:12.953033   22682 cli_runner.go:164] Run: docker container inspect offline-docker-438000 --format={{.State.Status}}
	W0425 13:05:13.006847   22682 cli_runner.go:211] docker container inspect offline-docker-438000 --format={{.State.Status}} returned with exit code 1
	I0425 13:05:13.006901   22682 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-438000": docker container inspect offline-docker-438000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:13.006912   22682 oci.go:664] temporary error: container offline-docker-438000 status is  but expect it to be exited
	I0425 13:05:13.006937   22682 retry.go:31] will retry after 2.565554742s: couldn't verify container is exited. %v: unknown state "offline-docker-438000": docker container inspect offline-docker-438000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:15.573504   22682 cli_runner.go:164] Run: docker container inspect offline-docker-438000 --format={{.State.Status}}
	W0425 13:05:15.626912   22682 cli_runner.go:211] docker container inspect offline-docker-438000 --format={{.State.Status}} returned with exit code 1
	I0425 13:05:15.626967   22682 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-438000": docker container inspect offline-docker-438000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:15.626976   22682 oci.go:664] temporary error: container offline-docker-438000 status is  but expect it to be exited
	I0425 13:05:15.626997   22682 retry.go:31] will retry after 4.326145202s: couldn't verify container is exited. %v: unknown state "offline-docker-438000": docker container inspect offline-docker-438000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:19.955569   22682 cli_runner.go:164] Run: docker container inspect offline-docker-438000 --format={{.State.Status}}
	W0425 13:05:20.008633   22682 cli_runner.go:211] docker container inspect offline-docker-438000 --format={{.State.Status}} returned with exit code 1
	I0425 13:05:20.008681   22682 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-438000": docker container inspect offline-docker-438000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:20.008691   22682 oci.go:664] temporary error: container offline-docker-438000 status is  but expect it to be exited
	I0425 13:05:20.008718   22682 retry.go:31] will retry after 8.039093457s: couldn't verify container is exited. %v: unknown state "offline-docker-438000": docker container inspect offline-docker-438000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:28.050077   22682 cli_runner.go:164] Run: docker container inspect offline-docker-438000 --format={{.State.Status}}
	W0425 13:05:28.104639   22682 cli_runner.go:211] docker container inspect offline-docker-438000 --format={{.State.Status}} returned with exit code 1
	I0425 13:05:28.104685   22682 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-438000": docker container inspect offline-docker-438000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:05:28.104695   22682 oci.go:664] temporary error: container offline-docker-438000 status is  but expect it to be exited
	I0425 13:05:28.104727   22682 oci.go:88] couldn't shut down offline-docker-438000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-438000": docker container inspect offline-docker-438000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	 
	I0425 13:05:28.104800   22682 cli_runner.go:164] Run: docker rm -f -v offline-docker-438000
	I0425 13:05:28.156153   22682 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-438000
	W0425 13:05:28.204304   22682 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-438000 returned with exit code 1
	I0425 13:05:28.204419   22682 cli_runner.go:164] Run: docker network inspect offline-docker-438000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0425 13:05:28.252592   22682 cli_runner.go:164] Run: docker network rm offline-docker-438000
	I0425 13:05:28.358749   22682 fix.go:124] Sleeping 1 second for extra luck!
	I0425 13:05:29.359548   22682 start.go:125] createHost starting for "" (driver="docker")
	I0425 13:05:29.381717   22682 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0425 13:05:29.381902   22682 start.go:159] libmachine.API.Create for "offline-docker-438000" (driver="docker")
	I0425 13:05:29.381934   22682 client.go:168] LocalClient.Create starting
	I0425 13:05:29.382152   22682 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/ca.pem
	I0425 13:05:29.382259   22682 main.go:141] libmachine: Decoding PEM data...
	I0425 13:05:29.382287   22682 main.go:141] libmachine: Parsing certificate...
	I0425 13:05:29.382374   22682 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/cert.pem
	I0425 13:05:29.382448   22682 main.go:141] libmachine: Decoding PEM data...
	I0425 13:05:29.382471   22682 main.go:141] libmachine: Parsing certificate...
	I0425 13:05:29.403945   22682 cli_runner.go:164] Run: docker network inspect offline-docker-438000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0425 13:05:29.457759   22682 cli_runner.go:211] docker network inspect offline-docker-438000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0425 13:05:29.457855   22682 network_create.go:281] running [docker network inspect offline-docker-438000] to gather additional debugging logs...
	I0425 13:05:29.457878   22682 cli_runner.go:164] Run: docker network inspect offline-docker-438000
	W0425 13:05:29.505301   22682 cli_runner.go:211] docker network inspect offline-docker-438000 returned with exit code 1
	I0425 13:05:29.505335   22682 network_create.go:284] error running [docker network inspect offline-docker-438000]: docker network inspect offline-docker-438000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-438000 not found
	I0425 13:05:29.505351   22682 network_create.go:286] output of [docker network inspect offline-docker-438000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-438000 not found
	
	** /stderr **
	I0425 13:05:29.505483   22682 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0425 13:05:29.555160   22682 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:05:29.556814   22682 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:05:29.558240   22682 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:05:29.559802   22682 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:05:29.561347   22682 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:05:29.561739   22682 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021bf6e0}
	I0425 13:05:29.561751   22682 network_create.go:124] attempt to create docker network offline-docker-438000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0425 13:05:29.561820   22682 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-438000 offline-docker-438000
	I0425 13:05:29.645853   22682 network_create.go:108] docker network offline-docker-438000 192.168.94.0/24 created
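The subnet scan above walks candidate 192.168.x.0/24 blocks, stepping the third octet by 9 (49, 58, 67, 76, 85, 94), and takes the first block with no existing reservation. A rough sketch of that selection loop, with the step size and reservation set inferred from the log rather than taken from minikube's network.go:

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet returns the first candidate /24 that is not reserved,
// stepping the third octet by 9 as the log above shows.
func firstFreeSubnet(reserved map[string]bool) (*net.IPNet, error) {
	for octet := 49; octet <= 254; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if reserved[cidr] {
			continue
		}
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		return subnet, nil
	}
	return nil, fmt.Errorf("no free private /24 found")
}

func main() {
	reserved := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	subnet, err := firstFreeSubnet(reserved)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", subnet) // 192.168.94.0/24
}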
	I0425 13:05:29.645890   22682 kic.go:121] calculated static IP "192.168.94.2" for the "offline-docker-438000" container
	I0425 13:05:29.646002   22682 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0425 13:05:29.695824   22682 cli_runner.go:164] Run: docker volume create offline-docker-438000 --label name.minikube.sigs.k8s.io=offline-docker-438000 --label created_by.minikube.sigs.k8s.io=true
	I0425 13:05:29.744298   22682 oci.go:103] Successfully created a docker volume offline-docker-438000
	I0425 13:05:29.744408   22682 cli_runner.go:164] Run: docker run --rm --name offline-docker-438000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-438000 --entrypoint /usr/bin/test -v offline-docker-438000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0425 13:05:29.980179   22682 oci.go:107] Successfully prepared a docker volume offline-docker-438000
	I0425 13:05:29.980208   22682 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 13:05:29.980221   22682 kic.go:194] Starting extracting preloaded images to volume ...
	I0425 13:05:29.980329   22682 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-438000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0425 13:11:29.383019   22682 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 13:11:29.383157   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:11:29.436323   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	I0425 13:11:29.436441   22682 retry.go:31] will retry after 185.031802ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
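Every failed port lookup that follows goes through a retry helper: re-run the probe after an increasing, jittered delay (185ms, 234ms, 563ms, ...) until it succeeds or the attempt budget runs out. A minimal version of the pattern; the growth rule and jitter here are illustrative, not retry.go's exact schedule:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping for a growing, jittered
// delay between failures, similar to the "will retry after ..." lines above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Grow the delay each attempt and add up to ~50% random jitter.
		delay := base * time.Duration(i+1)
		delay += time.Duration(rand.Int63n(int64(delay)/2 + 1))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 150*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("No such container: offline-docker-438000")
		}
		return nil
	})
	fmt.Println("result:", err)
}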
	I0425 13:11:29.623844   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:11:29.674734   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	I0425 13:11:29.674845   22682 retry.go:31] will retry after 234.15859ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:11:29.909891   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:11:29.961452   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	I0425 13:11:29.961568   22682 retry.go:31] will retry after 563.730988ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:11:30.526166   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:11:30.575810   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	I0425 13:11:30.575908   22682 retry.go:31] will retry after 556.446108ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:11:31.134110   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:11:31.185734   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	W0425 13:11:31.185848   22682 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	
	W0425 13:11:31.185872   22682 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
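The free-space checks run `df -h /var | awk 'NR==2{print $5}'` (and, below, `df -BG` for the available GiB) inside the guest over SSH, but with no container there is no port 22 to dial, so every session fails before df ever runs. For reference, the same field extraction done in Go on sample df output (the sample text and usePercent helper are illustrative):

package main

import (
	"fmt"
	"strings"
)

// usePercent extracts column 5 of the second line of `df -h` output,
// the same field the awk one-liner above selects.
func usePercent(dfOutput string) (string, error) {
	lines := strings.Split(strings.TrimSpace(dfOutput), "\n")
	if len(lines) < 2 {
		return "", fmt.Errorf("unexpected df output: %q", dfOutput)
	}
	fields := strings.Fields(lines[1])
	if len(fields) < 5 {
		return "", fmt.Errorf("unexpected df line: %q", lines[1])
	}
	return fields[4], nil
}

func main() {
	sample := `Filesystem      Size  Used Avail Use% Mounted on
overlay          59G   20G   36G  36% /var`
	pct, err := usePercent(sample)
	fmt.Println(pct, err) // 36% <nil>
}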
	I0425 13:11:31.185937   22682 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0425 13:11:31.186000   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:11:31.235146   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	I0425 13:11:31.235242   22682 retry.go:31] will retry after 136.273928ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:11:31.372858   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:11:31.424994   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	I0425 13:11:31.425091   22682 retry.go:31] will retry after 342.424129ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:11:31.769339   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:11:31.822134   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	I0425 13:11:31.822235   22682 retry.go:31] will retry after 353.540142ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:11:32.178162   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:11:32.230463   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	I0425 13:11:32.230561   22682 retry.go:31] will retry after 586.553602ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:11:32.819484   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:11:32.871791   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	W0425 13:11:32.871899   22682 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	
	W0425 13:11:32.871925   22682 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:11:32.871933   22682 start.go:128] duration metric: took 6m3.511806311s to createHost
	I0425 13:11:32.872004   22682 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 13:11:32.872065   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:11:32.920168   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	I0425 13:11:32.920261   22682 retry.go:31] will retry after 160.887645ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:11:33.082594   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:11:33.135129   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	I0425 13:11:33.135216   22682 retry.go:31] will retry after 213.168906ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:11:33.350758   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:11:33.401635   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	I0425 13:11:33.401727   22682 retry.go:31] will retry after 376.066132ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:11:33.779082   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:11:33.832069   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	I0425 13:11:33.832168   22682 retry.go:31] will retry after 476.858428ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:11:34.311052   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:11:34.361592   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	W0425 13:11:34.361743   22682 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	
	W0425 13:11:34.361768   22682 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:11:34.361837   22682 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0425 13:11:34.361892   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:11:34.409451   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	I0425 13:11:34.409539   22682 retry.go:31] will retry after 354.348648ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:11:34.766243   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:11:34.816810   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	I0425 13:11:34.816901   22682 retry.go:31] will retry after 193.511485ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:11:35.011815   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:11:35.062076   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	I0425 13:11:35.062174   22682 retry.go:31] will retry after 746.933972ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:11:35.811456   22682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000
	W0425 13:11:35.860966   22682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000 returned with exit code 1
	W0425 13:11:35.861074   22682 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	
	W0425 13:11:35.861091   22682 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-438000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-438000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000
	I0425 13:11:35.861101   22682 fix.go:56] duration metric: took 6m28.758528256s for fixHost
	I0425 13:11:35.861107   22682 start.go:83] releasing machines lock for "offline-docker-438000", held for 6m28.758585971s
	W0425 13:11:35.861181   22682 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-438000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-438000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0425 13:11:35.903344   22682 out.go:177] 
	W0425 13:11:35.924601   22682 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0425 13:11:35.924655   22682 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0425 13:11:35.924682   22682 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0425 13:11:35.945357   22682 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-438000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
panic.go:626: *** TestOffline FAILED at 2024-04-25 13:11:36.021297 -0700 PDT m=+6044.993717667
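The assertion at aab_offline_test.go:58 fails because the start command exited 52 (minikube's DRV_CREATE_TIMEOUT code) rather than 0. Pulling that status out of an os/exec error looks roughly like this; exitCode is an illustrative helper, simpler than the test suite's own wrappers:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode returns the process exit status carried by err, or 0 for nil.
func exitCode(err error) int {
	if err == nil {
		return 0
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode()
	}
	return -1 // command failed to run at all
}

func main() {
	err := exec.Command("false").Run()
	fmt.Println(exitCode(err)) // 1
}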
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-438000
helpers_test.go:235: (dbg) docker inspect offline-docker-438000:

-- stdout --
	[
	    {
	        "Name": "offline-docker-438000",
	        "Id": "3cdcbd6b0442521ca6547ded738f0ab8fd8c11e3f093af71c3de91af8e234549",
	        "Created": "2024-04-25T20:05:29.606688399Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-438000"
	        }
	    }
	]

-- /stdout --
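The docker inspect output above shows the bridge network outlived the failed run with zero attached containers; only the profile delete below cleans it up. The JSON decodes into a small struct covering just the fields shown; dockerNetwork is an illustrative type, not one from the Docker SDK:

package main

import (
	"encoding/json"
	"fmt"
)

// dockerNetwork models the subset of `docker network inspect` output
// printed in the post-mortem above.
type dockerNetwork struct {
	Name string `json:"Name"`
	IPAM struct {
		Config []struct {
			Subnet  string `json:"Subnet"`
			Gateway string `json:"Gateway"`
		} `json:"Config"`
	} `json:"IPAM"`
	Labels map[string]string `json:"Labels"`
}

func main() {
	raw := `[{"Name":"offline-docker-438000",
	  "IPAM":{"Config":[{"Subnet":"192.168.94.0/24","Gateway":"192.168.94.1"}]},
	  "Labels":{"created_by.minikube.sigs.k8s.io":"true"}}]`
	var nets []dockerNetwork
	if err := json.Unmarshal([]byte(raw), &nets); err != nil {
		panic(err)
	}
	fmt.Println(nets[0].Name, nets[0].IPAM.Config[0].Subnet)
}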
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-438000 -n offline-docker-438000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-438000 -n offline-docker-438000: exit status 7 (113.589113ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0425 13:11:36.185746   23521 status.go:249] status error: host: state: unknown state "offline-docker-438000": docker container inspect offline-docker-438000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-438000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-438000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-438000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-438000
--- FAIL: TestOffline (759.24s)

TestCertOptions (7201.435s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-830000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
E0425 13:24:52.181300    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 13:25:09.118734    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 13:29:03.064503    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 13:30:09.119211    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestCertExpiration (6m36s)
	TestCertOptions (6m2s)
	TestNetworkPlugins (31m53s)
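This second failure is different in kind: the test binary hit go test's global -timeout (2h for this suite), and on expiry the runtime panics and dumps every live goroutine, which is what the traces below are. A long-running test can bow out gracefully instead by consulting t.Deadline; a small sketch (TestBudget and the 5-minute margin are illustrative):

package integration

import (
	"testing"
	"time"
)

// TestBudget skips late, slow work when the -timeout deadline is near,
// instead of letting the alarm goroutine panic the whole binary as above.
func TestBudget(t *testing.T) {
	if deadline, ok := t.Deadline(); ok && time.Until(deadline) < 5*time.Minute {
		t.Skip("not enough time left before -timeout fires")
	}
	// ... slow work would go here ...
}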

goroutine 2596 [running]:
testing.(*M).startAlarm.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 19 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000a0cb60, 0xc00083dbb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0000107e0, {0xe820fc0, 0x2a, 0x2a}, {0xa372aa5?, 0xbea8e19?, 0xe843d80?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc000c78820)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc000c78820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 11 [select, 2 minutes]:
go.opencensus.io/stats/view.(*worker).start(0xc000823b80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 2595 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc0026d4160, 0xc0026e41e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 666
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 38 [select, 2 minutes]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 37
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 667 [syscall, 6 minutes]:
syscall.syscall6(0xc0027d5f80?, 0x1000000000010?, 0x10100000019?, 0x55fee938?, 0x90?, 0xf15d5b8?, 0x90?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc002077a40?, 0xa2b30a5?, 0x90?, 0xd3fc140?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xa3e3c45?, 0xc002077a74, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc000128090)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000c16b00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000c16b00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000a0d380, 0xc000c16b00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertExpiration(0xc000a0d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:123 +0x2c5
testing.tRunner(0xc000a0d380, 0xd48f470)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 183 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00091cf00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 123
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 184 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00202acc0, 0xc0009087e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 123
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2187 [chan receive, 32 minutes]:
testing.(*T).Run(0xc0027881a0, {0xbe4f8e7?, 0xa750fcfc678?}, 0xc0021fc078)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0027881a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0027881a0, 0xd48f558)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2189 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000686a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0027884e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0027884e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc0027884e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc0027884e0, 0xd48f570)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 666 [syscall, 6 minutes]:
syscall.syscall6(0xc002875f80?, 0x1000000000010?, 0x10000000019?, 0x55c1ae98?, 0x90?, 0xf15d5b8?, 0x90?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0008378a0?, 0xa2b30a5?, 0x90?, 0xd3fc140?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xa3e3c45?, 0xc0008378d4, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc00090e270)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0026d4160)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0026d4160)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000a0d1e0, 0xc0026d4160)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertOptions(0xc000a0d1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:49 +0x445
testing.tRunner(0xc000a0d1e0, 0xd48f478)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 745 [IO wait, 115 minutes]:
internal/poll.runtime_pollWait(0x5615e5c8, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0023c8c80?, 0x3fe?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0023c8c80)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc0023c8c80)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0023d7c20)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0023d7c20)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0028af1d0, {0xd4b20f0, 0xc0023d7c20})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0028af1d0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc0024d0d00?, 0xc0024d11e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 742
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129
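Goroutine 745 is the helper HTTP proxy that the functional tests start in the background; nothing ever shuts it down, so it sits in Accept for the remaining 115 minutes of the run. The underlying pattern, sketched with an explicit Shutdown to show what releases such a goroutine (address and timings are illustrative):

package main

import (
	"context"
	"log"
	"net/http"
	"time"
)

func main() {
	srv := &http.Server{Addr: "127.0.0.1:0"}
	// Serve in a background goroutine; without the Shutdown below this
	// goroutine would sit in Accept for the life of the process, exactly
	// like goroutine 745 in the dump above.
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Println("proxy exited:", err)
		}
	}()
	time.Sleep(100 * time.Millisecond) // stand-in for the test body
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	_ = srv.Shutdown(ctx) // closes the listener and ends the Accept loop
}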

goroutine 2188 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000686a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002788340)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002788340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc002788340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc002788340, 0xd48f560)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1292 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc002bdd760, 0xc002a54ae0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 854
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 187 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00202ac90, 0x2d)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xcf893a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00091cd80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00202acc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000492990, {0xd49b760, 0xc000c62cf0}, 0x1, 0xc0009087e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000492990, 0x3b9aca00, 0x0, 0x1, 0xc0009087e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 184
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef
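Goroutines 184, 187, and 188 belong to client-go's certificate-rotation worker, which loops through apimachinery's wait helpers until its stop channel closes; the test binary never closes it, so the loop is still parked here at dump time. The basic shape, assuming the k8s.io/apimachinery module is available:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	stop := make(chan struct{})
	// wait.Until re-invokes the worker every period until stop closes;
	// cert_rotation.go drives its workqueue drain through the same helper.
	go wait.Until(func() { fmt.Println("rotate certs if needed") }, time.Second, stop)

	time.Sleep(2500 * time.Millisecond)
	close(stop) // without this close, the goroutine lives for the whole run
	time.Sleep(100 * time.Millisecond)
}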

goroutine 188 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xd4bf240, 0xc0009087e0}, 0xc000113f50, 0xc000ca1f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xd4bf240, 0xc0009087e0}, 0xd?, 0xc000113f50, 0xc000113f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xd4bf240?, 0xc0009087e0?}, 0xc000a0c4e0?, 0xa3e6900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xa3e7865?, 0xc000a0c4e0?, 0xc00202a440?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 184
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 189 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 188
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 1856 [syscall, 97 minutes]:
syscall.syscall(0x0?, 0xc002b545b8?, 0xc0025e36f0?, 0xa352f1d?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sys_darwin.go:23 +0x70
syscall.Flock(0xc002110000?, 0x1?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_darwin_amd64.go:682 +0x29
github.com/juju/mutex/v2.acquireFlock.func3()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:114 +0x34
github.com/juju/mutex/v2.acquireFlock.func4()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:121 +0x58
github.com/juju/mutex/v2.acquireFlock.func5()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:151 +0x22
created by github.com/juju/mutex/v2.acquireFlock in goroutine 1851
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:150 +0x4b1
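Goroutine 1856 has been parked in Flock for 97 minutes: juju/mutex serializes minikube's machine operations through an advisory file lock, and while another process holds it, every other caller blocks in exactly this syscall. The primitive itself (the lock path is illustrative):

package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	f, err := os.OpenFile("/tmp/minikube-demo.lock", os.O_CREATE|os.O_RDWR, 0o644)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// LOCK_EX blocks until the advisory lock is free; a holder that never
	// releases it leaves callers stuck here, as in goroutine 1856 above.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		panic(err)
	}
	fmt.Println("lock acquired")
	_ = syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
}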

goroutine 2267 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000686a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0027889c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0027889c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0027889c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0027889c0, 0xc002424000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2266
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1267 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc002bdc6e0, 0xc002a55c80)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1266
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 2274 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000686a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00212cd00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00212cd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc00212cd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:85 +0x89
testing.tRunner(0xc00212cd00, 0xd48f580)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2289 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000686a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002789380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002789380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002789380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002789380, 0xc002424400)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2266
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2272 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000686a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0027891e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0027891e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0027891e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0027891e0, 0xc002424380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2266
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2260 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000686a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00212cea0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00212cea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc00212cea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc00212cea0, 0xd48f5a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2557 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc000c16b00, 0xc002a54060)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 667
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 1330 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc0026e8840, 0xc0024cff20)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1329
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 2594 [IO wait, 6 minutes]:
internal/poll.runtime_pollWait(0x5615e2e0, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00279e540?, 0xc000897200?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00279e540, {0xc000897200, 0x200, 0x200})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc002724218, {0xc000897200?, 0x56209f38?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002874330, {0xd49a178, 0xc000cf2300})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xd49a2b8, 0xc002874330}, {0xd49a178, 0xc000cf2300}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0xd49a2b8, 0xc002874330})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xe7e2f20?, {0xd49a2b8?, 0xc002874330?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0xd49a2b8, 0xc002874330}, {0xd49a238, 0xc002724218}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0001fa180?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 666
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

goroutine 968 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xd4bf240, 0xc0009087e0}, 0xc000cf5f50, 0xc000c9ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xd4bf240, 0xc0009087e0}, 0x11?, 0xc000cf5f50, 0xc000cf5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xd4bf240?, 0xc0009087e0?}, 0xc002106340?, 0xa3e6900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000cf5fd0?, 0xa42cc04?, 0xc002382f00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 985
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 1125 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc0026e9080, 0xc0025f1620)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1124
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 984 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0023f3440)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 867
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2593 [IO wait, 6 minutes]:
internal/poll.runtime_pollWait(0x5615e3d8, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00279e300?, 0xc00271028f?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00279e300, {0xc00271028f, 0x571, 0x571})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0027241f8, {0xc00271028f?, 0xc000784700?, 0x225?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002874300, {0xd49a178, 0xc000cf22f0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xd49a2b8, 0xc002874300}, {0xd49a178, 0xc000cf22f0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc000cf5678?, {0xd49a2b8, 0xc002874300})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xe7e2f20?, {0xd49a2b8?, 0xc002874300?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0xd49a2b8, 0xc002874300}, {0xd49a238, 0xc0027241f8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002a545a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 666
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

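Stacks like goroutine 2593 come from os/exec itself: when a command's Stdout or Stderr is not an *os.File, (*Cmd).Start creates a pipe plus a goroutine that io.Copy's the child's output into the supplied writer, and that goroutine sits in IO wait until the child closes its end of the pipe. A standalone sketch of the same capture pattern (hypothetical command, not the test harness code):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("echo", "hello from the child")

	var out bytes.Buffer
	cmd.Stdout = &out // non-*os.File writer: Start adds a pipe + copier goroutine

	if err := cmd.Run(); err != nil { // Run = Start + Wait; Wait joins the copier
		fmt.Println("run failed:", err)
		return
	}
	fmt.Print(out.String())
}
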
goroutine 1367 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0xc000c805a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1365
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1800 +0x1585

goroutine 2275 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000686a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00212d380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00212d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc00212d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:143 +0x86
testing.tRunner(0xc00212d380, 0xd48f5a8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

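Goroutine 2275 (and the similar stacks below) is not stuck in test logic: a test that calls t.Parallel() blocks in waitParallel until one of the -test.parallel slots frees up, so while the 12m+ minikube starts hold the slots, the queued parallel tests all show "chan receive" for the same 32 minutes. A tiny illustrative test file (names invented) showing that gate:

package example

import (
	"testing"
	"time"
)

func TestSlotA(t *testing.T) {
	t.Parallel() // pauses here until a parallel slot is available
	time.Sleep(100 * time.Millisecond)
}

func TestSlotB(t *testing.T) {
	t.Parallel() // with -test.parallel=1 this waits for TestSlotA to finish
	time.Sleep(100 * time.Millisecond)
}
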
goroutine 2266 [chan receive, 32 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc002788000, 0xc0021fc078)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2187
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2270 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000686a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002788ea0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002788ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002788ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002788ea0, 0xc002424280)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2266
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2277 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000686a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00212d860)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00212d860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestMissingContainerUpgrade(0xc00212d860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:292 +0xb4
testing.tRunner(0xc00212d860, 0xd48f538)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1366 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0xc000c805a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1365
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1799 +0x152f

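Goroutines 1366/1367 are the readLoop/writeLoop pair that net/http.Transport runs for every keep-alive connection; they live as long as the idle connection does, which is why both show the same 109-minute age. One way to bound their lifetime, sketched below with illustrative values, is to cap the idle-connection timeout on the client's transport:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			IdleConnTimeout:     90 * time.Second, // closes idle conns (and their loops)
			MaxIdleConnsPerHost: 2,
		},
	}

	resp, err := client.Get("https://example.com")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	resp.Body.Close() // conn returns to the idle pool; its loops end at the timeout
}
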
goroutine 2555 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x5615e8b0, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002806060?, 0xc00225eb17?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002806060, {0xc00225eb17, 0x4e9, 0x4e9})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000cf2198, {0xc00225eb17?, 0xc002260540?, 0x22e?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0027d4120, {0xd49a178, 0xc002724320})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xd49a2b8, 0xc0027d4120}, {0xd49a178, 0xc002724320}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc0025df678?, {0xd49a2b8, 0xc0027d4120})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xe7e2f20?, {0xd49a2b8?, 0xc0027d4120?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0xd49a2b8, 0xc0027d4120}, {0xd49a238, 0xc000cf2198}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0026e4480?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 667
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

goroutine 2290 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000686a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002789520)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002789520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002789520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002789520, 0xc002424480)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2266
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2268 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000686a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002788b60)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002788b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002788b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002788b60, 0xc002424180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2266
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2271 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000686a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002789040)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002789040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002789040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002789040, 0xc002424300)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2266
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 985 [chan receive, 111 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002744ec0, 0xc0009087e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 867
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 969 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 968
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2276 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000686a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00212d6c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00212d6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc00212d6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:215 +0x39
testing.tRunner(0xc00212d6c0, 0xd48f520)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2269 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000686a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002788d00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002788d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002788d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002788d00, 0xc002424200)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2266
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2291 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000686a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0027896c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0027896c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0027896c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0027896c0, 0xc002424500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2266
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 967 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc002744e90, 0x2c)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xcf893a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0023f3320)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002744ec0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c02450, {0xd49b760, 0xc000cea1b0}, 0x1, 0xc0009087e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000c02450, 0x3b9aca00, 0x0, 0x1, 0xc0009087e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 985
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

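Goroutine 967 is the cert rotator's worker: wait.Until re-invokes the worker every period until the stop channel closes, and between invocations the worker blocks on the workqueue's Get (the sync.Cond.Wait at the top of the stack) whenever the queue is empty. A minimal sketch of that loop shape (the tick body and timings are invented):

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	stopCh := make(chan struct{})
	time.AfterFunc(3*time.Second, func() { close(stopCh) }) // end the demo

	wait.Until(func() {
		fmt.Println("worker tick") // a real worker would drain a queue here
	}, time.Second, stopCh) // returns once stopCh is closed
}
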
goroutine 2556 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x5615eaa0, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002806120?, 0xc0021cc063?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002806120, {0xc0021cc063, 0x39d, 0x39d})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000cf21b0, {0xc0021cc063?, 0x56032458?, 0x63?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0027d4180, {0xd49a178, 0xc002724330})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xd49a2b8, 0xc0027d4180}, {0xd49a178, 0xc002724330}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xe755860?, {0xd49a2b8, 0xc0027d4180})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xe7e2f20?, {0xd49a2b8?, 0xc0027d4180?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0xd49a2b8, 0xc0027d4180}, {0xd49a238, 0xc000cf21b0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002744280?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 667
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

TestDockerFlags (756.77s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-224000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E0425 13:14:03.020389    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 13:15:09.074929    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 13:18:46.119574    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 13:19:03.064105    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 13:20:09.118753    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 13:24:03.064031    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-224000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 52 (12m35.478973121s)

-- stdout --
	* [docker-flags-224000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18757
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "docker-flags-224000" primary control-plane node in "docker-flags-224000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-224000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0425 13:12:12.425286   23680 out.go:291] Setting OutFile to fd 1 ...
	I0425 13:12:12.425493   23680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 13:12:12.425499   23680 out.go:304] Setting ErrFile to fd 2...
	I0425 13:12:12.425503   23680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 13:12:12.425676   23680 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 13:12:12.427172   23680 out.go:298] Setting JSON to false
	I0425 13:12:12.450655   23680 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":13303,"bootTime":1714062629,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0425 13:12:12.450738   23680 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0425 13:12:12.472725   23680 out.go:177] * [docker-flags-224000] minikube v1.33.0 on Darwin 14.4.1
	I0425 13:12:12.514518   23680 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 13:12:12.514579   23680 notify.go:220] Checking for updates...
	I0425 13:12:12.558198   23680 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
	I0425 13:12:12.579562   23680 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0425 13:12:12.601423   23680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 13:12:12.622137   23680 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube
	I0425 13:12:12.643493   23680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 13:12:12.665361   23680 config.go:182] Loaded profile config "force-systemd-flag-359000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 13:12:12.665519   23680 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 13:12:12.720739   23680 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0425 13:12:12.720915   23680 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0425 13:12:12.829645   23680 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:117 OomKillDisable:false NGoroutines:235 SystemTime:2024-04-25 20:12:12.818274005 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddre
ss:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211088384 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.
12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-d
ev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/li
b/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0425 13:12:12.872155   23680 out.go:177] * Using the docker driver based on user configuration
	I0425 13:12:12.893234   23680 start.go:297] selected driver: docker
	I0425 13:12:12.893275   23680 start.go:901] validating driver "docker" against <nil>
	I0425 13:12:12.893291   23680 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 13:12:12.897680   23680 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0425 13:12:13.003624   23680 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:117 OomKillDisable:false NGoroutines:235 SystemTime:2024-04-25 20:12:12.991819674 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddre
ss:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211088384 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.
12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-d
ev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/li
b/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0425 13:12:13.003821   23680 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0425 13:12:13.004012   23680 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0425 13:12:13.025761   23680 out.go:177] * Using Docker Desktop driver with root privileges
	I0425 13:12:13.047857   23680 cni.go:84] Creating CNI manager for ""
	I0425 13:12:13.047901   23680 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0425 13:12:13.047915   23680 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0425 13:12:13.048004   23680 start.go:340] cluster config:
	{Name:docker-flags-224000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-224000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPat
h: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 13:12:13.069692   23680 out.go:177] * Starting "docker-flags-224000" primary control-plane node in "docker-flags-224000" cluster
	I0425 13:12:13.111736   23680 cache.go:121] Beginning downloading kic base image for docker with docker
	I0425 13:12:13.133729   23680 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0425 13:12:13.180203   23680 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 13:12:13.180226   23680 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0425 13:12:13.180278   23680 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0425 13:12:13.180307   23680 cache.go:56] Caching tarball of preloaded images
	I0425 13:12:13.180539   23680 preload.go:173] Found /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0425 13:12:13.180577   23680 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0425 13:12:13.181259   23680 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/docker-flags-224000/config.json ...
	I0425 13:12:13.181382   23680 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/docker-flags-224000/config.json: {Name:mk2eee3c9438757e3327e554f2531c74c6ea36c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 13:12:13.232038   23680 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0425 13:12:13.232056   23680 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0425 13:12:13.232085   23680 cache.go:194] Successfully downloaded all kic artifacts
	I0425 13:12:13.232158   23680 start.go:360] acquireMachinesLock for docker-flags-224000: {Name:mkfdc2cba790fab3976997701fa4dbe670b01a57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 13:12:13.232316   23680 start.go:364] duration metric: took 145.559µs to acquireMachinesLock for "docker-flags-224000"
	I0425 13:12:13.232347   23680 start.go:93] Provisioning new machine with config: &{Name:docker-flags-224000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-224000 Namespace:
default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0425 13:12:13.232435   23680 start.go:125] createHost starting for "" (driver="docker")
	I0425 13:12:13.274014   23680 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0425 13:12:13.274400   23680 start.go:159] libmachine.API.Create for "docker-flags-224000" (driver="docker")
	I0425 13:12:13.274446   23680 client.go:168] LocalClient.Create starting
	I0425 13:12:13.274650   23680 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/ca.pem
	I0425 13:12:13.274743   23680 main.go:141] libmachine: Decoding PEM data...
	I0425 13:12:13.274776   23680 main.go:141] libmachine: Parsing certificate...
	I0425 13:12:13.274886   23680 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/cert.pem
	I0425 13:12:13.274963   23680 main.go:141] libmachine: Decoding PEM data...
	I0425 13:12:13.274977   23680 main.go:141] libmachine: Parsing certificate...
	I0425 13:12:13.275871   23680 cli_runner.go:164] Run: docker network inspect docker-flags-224000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0425 13:12:13.324931   23680 cli_runner.go:211] docker network inspect docker-flags-224000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0425 13:12:13.325029   23680 network_create.go:281] running [docker network inspect docker-flags-224000] to gather additional debugging logs...
	I0425 13:12:13.325049   23680 cli_runner.go:164] Run: docker network inspect docker-flags-224000
	W0425 13:12:13.373020   23680 cli_runner.go:211] docker network inspect docker-flags-224000 returned with exit code 1
	I0425 13:12:13.373046   23680 network_create.go:284] error running [docker network inspect docker-flags-224000]: docker network inspect docker-flags-224000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-224000 not found
	I0425 13:12:13.373072   23680 network_create.go:286] output of [docker network inspect docker-flags-224000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-224000 not found
	
	** /stderr **
	I0425 13:12:13.373195   23680 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0425 13:12:13.423237   23680 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:12:13.424611   23680 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:12:13.426168   23680 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:12:13.426565   23680 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00227c200}
	I0425 13:12:13.426613   23680 network_create.go:124] attempt to create docker network docker-flags-224000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0425 13:12:13.426684   23680 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-224000 docker-flags-224000
	W0425 13:12:13.474830   23680 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-224000 docker-flags-224000 returned with exit code 1
	W0425 13:12:13.474862   23680 network_create.go:149] failed to create docker network docker-flags-224000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-224000 docker-flags-224000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0425 13:12:13.474883   23680 network_create.go:116] failed to create docker network docker-flags-224000 192.168.76.0/24, will retry: subnet is taken
	I0425 13:12:13.476222   23680 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:12:13.476570   23680 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022a5310}
	I0425 13:12:13.476582   23680 network_create.go:124] attempt to create docker network docker-flags-224000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0425 13:12:13.476652   23680 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-224000 docker-flags-224000
	I0425 13:12:13.561585   23680 network_create.go:108] docker network docker-flags-224000 192.168.85.0/24 created
	I0425 13:12:13.561627   23680 kic.go:121] calculated static IP "192.168.85.2" for the "docker-flags-224000" container
	I0425 13:12:13.561743   23680 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0425 13:12:13.612339   23680 cli_runner.go:164] Run: docker volume create docker-flags-224000 --label name.minikube.sigs.k8s.io=docker-flags-224000 --label created_by.minikube.sigs.k8s.io=true
	I0425 13:12:13.661657   23680 oci.go:103] Successfully created a docker volume docker-flags-224000
	I0425 13:12:13.661782   23680 cli_runner.go:164] Run: docker run --rm --name docker-flags-224000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-224000 --entrypoint /usr/bin/test -v docker-flags-224000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0425 13:12:13.985939   23680 oci.go:107] Successfully prepared a docker volume docker-flags-224000
	I0425 13:12:13.985996   23680 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 13:12:13.986011   23680 kic.go:194] Starting extracting preloaded images to volume ...
	I0425 13:12:13.986105   23680 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-224000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0425 13:18:13.320059   23680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 13:18:13.320201   23680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000
	W0425 13:18:13.371233   23680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000 returned with exit code 1
	I0425 13:18:13.371343   23680 retry.go:31] will retry after 223.916819ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:13.597546   23680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000
	W0425 13:18:13.650106   23680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000 returned with exit code 1
	I0425 13:18:13.650219   23680 retry.go:31] will retry after 285.778387ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:13.936571   23680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000
	W0425 13:18:13.988016   23680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000 returned with exit code 1
	I0425 13:18:13.988128   23680 retry.go:31] will retry after 413.364001ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:14.403877   23680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000
	W0425 13:18:14.454971   23680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000 returned with exit code 1
	W0425 13:18:14.455086   23680 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	
	W0425 13:18:14.455110   23680 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:14.455165   23680 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0425 13:18:14.455227   23680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000
	W0425 13:18:14.503579   23680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000 returned with exit code 1
	I0425 13:18:14.503674   23680 retry.go:31] will retry after 370.349308ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:14.876358   23680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000
	W0425 13:18:14.927624   23680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000 returned with exit code 1
	I0425 13:18:14.927709   23680 retry.go:31] will retry after 396.33634ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:15.326416   23680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000
	W0425 13:18:15.377395   23680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000 returned with exit code 1
	I0425 13:18:15.377487   23680 retry.go:31] will retry after 453.215737ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:15.832101   23680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000
	W0425 13:18:15.885285   23680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000 returned with exit code 1
	W0425 13:18:15.885380   23680 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	
	W0425 13:18:15.885398   23680 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:15.885422   23680 start.go:128] duration metric: took 6m2.607774836s to createHost
	I0425 13:18:15.885429   23680 start.go:83] releasing machines lock for "docker-flags-224000", held for 6m2.607908963s
	W0425 13:18:15.885446   23680 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0425 13:18:15.885914   23680 cli_runner.go:164] Run: docker container inspect docker-flags-224000 --format={{.State.Status}}
	W0425 13:18:15.933485   23680 cli_runner.go:211] docker container inspect docker-flags-224000 --format={{.State.Status}} returned with exit code 1
	I0425 13:18:15.933532   23680 delete.go:82] Unable to get host status for docker-flags-224000, assuming it has already been deleted: state: unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	W0425 13:18:15.933605   23680 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0425 13:18:15.933615   23680 start.go:728] Will try again in 5 seconds ...
	I0425 13:18:20.935796   23680 start.go:360] acquireMachinesLock for docker-flags-224000: {Name:mkfdc2cba790fab3976997701fa4dbe670b01a57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 13:18:20.936022   23680 start.go:364] duration metric: took 172.145µs to acquireMachinesLock for "docker-flags-224000"
	I0425 13:18:20.936057   23680 start.go:96] Skipping create...Using existing machine configuration
	I0425 13:18:20.936076   23680 fix.go:54] fixHost starting: 
	I0425 13:18:20.936538   23680 cli_runner.go:164] Run: docker container inspect docker-flags-224000 --format={{.State.Status}}
	W0425 13:18:20.991466   23680 cli_runner.go:211] docker container inspect docker-flags-224000 --format={{.State.Status}} returned with exit code 1
	I0425 13:18:20.991532   23680 fix.go:112] recreateIfNeeded on docker-flags-224000: state= err=unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:20.991561   23680 fix.go:117] machineExists: false. err=machine does not exist
	I0425 13:18:21.013112   23680 out.go:177] * docker "docker-flags-224000" container is missing, will recreate.
	I0425 13:18:21.054811   23680 delete.go:124] DEMOLISHING docker-flags-224000 ...
	I0425 13:18:21.055008   23680 cli_runner.go:164] Run: docker container inspect docker-flags-224000 --format={{.State.Status}}
	W0425 13:18:21.103789   23680 cli_runner.go:211] docker container inspect docker-flags-224000 --format={{.State.Status}} returned with exit code 1
	W0425 13:18:21.103839   23680 stop.go:83] unable to get state: unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:21.103855   23680 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:21.104238   23680 cli_runner.go:164] Run: docker container inspect docker-flags-224000 --format={{.State.Status}}
	W0425 13:18:21.152172   23680 cli_runner.go:211] docker container inspect docker-flags-224000 --format={{.State.Status}} returned with exit code 1
	I0425 13:18:21.152221   23680 delete.go:82] Unable to get host status for docker-flags-224000, assuming it has already been deleted: state: unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:21.152297   23680 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-224000
	W0425 13:18:21.199982   23680 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-224000 returned with exit code 1
	I0425 13:18:21.200015   23680 kic.go:371] could not find the container docker-flags-224000 to remove it. will try anyways
	I0425 13:18:21.200092   23680 cli_runner.go:164] Run: docker container inspect docker-flags-224000 --format={{.State.Status}}
	W0425 13:18:21.247139   23680 cli_runner.go:211] docker container inspect docker-flags-224000 --format={{.State.Status}} returned with exit code 1
	W0425 13:18:21.247182   23680 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:21.247261   23680 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-224000 /bin/bash -c "sudo init 0"
	W0425 13:18:21.295082   23680 cli_runner.go:211] docker exec --privileged -t docker-flags-224000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0425 13:18:21.295114   23680 oci.go:650] error shutdown docker-flags-224000: docker exec --privileged -t docker-flags-224000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:22.296416   23680 cli_runner.go:164] Run: docker container inspect docker-flags-224000 --format={{.State.Status}}
	W0425 13:18:22.346308   23680 cli_runner.go:211] docker container inspect docker-flags-224000 --format={{.State.Status}} returned with exit code 1
	I0425 13:18:22.346354   23680 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:22.346364   23680 oci.go:664] temporary error: container docker-flags-224000 status is  but expect it to be exited
	I0425 13:18:22.346390   23680 retry.go:31] will retry after 681.823284ms: couldn't verify container is exited. %v: unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:23.029131   23680 cli_runner.go:164] Run: docker container inspect docker-flags-224000 --format={{.State.Status}}
	W0425 13:18:23.079083   23680 cli_runner.go:211] docker container inspect docker-flags-224000 --format={{.State.Status}} returned with exit code 1
	I0425 13:18:23.079141   23680 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:23.079157   23680 oci.go:664] temporary error: container docker-flags-224000 status is  but expect it to be exited
	I0425 13:18:23.079176   23680 retry.go:31] will retry after 1.120061001s: couldn't verify container is exited. %v: unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:24.199855   23680 cli_runner.go:164] Run: docker container inspect docker-flags-224000 --format={{.State.Status}}
	W0425 13:18:24.251073   23680 cli_runner.go:211] docker container inspect docker-flags-224000 --format={{.State.Status}} returned with exit code 1
	I0425 13:18:24.251130   23680 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:24.251151   23680 oci.go:664] temporary error: container docker-flags-224000 status is  but expect it to be exited
	I0425 13:18:24.251173   23680 retry.go:31] will retry after 799.339895ms: couldn't verify container is exited. %v: unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:25.052853   23680 cli_runner.go:164] Run: docker container inspect docker-flags-224000 --format={{.State.Status}}
	W0425 13:18:25.107781   23680 cli_runner.go:211] docker container inspect docker-flags-224000 --format={{.State.Status}} returned with exit code 1
	I0425 13:18:25.107827   23680 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:25.107843   23680 oci.go:664] temporary error: container docker-flags-224000 status is  but expect it to be exited
	I0425 13:18:25.107869   23680 retry.go:31] will retry after 1.493178486s: couldn't verify container is exited. %v: unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:26.602576   23680 cli_runner.go:164] Run: docker container inspect docker-flags-224000 --format={{.State.Status}}
	W0425 13:18:26.653348   23680 cli_runner.go:211] docker container inspect docker-flags-224000 --format={{.State.Status}} returned with exit code 1
	I0425 13:18:26.653412   23680 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:26.653425   23680 oci.go:664] temporary error: container docker-flags-224000 status is  but expect it to be exited
	I0425 13:18:26.653447   23680 retry.go:31] will retry after 2.540864448s: couldn't verify container is exited. %v: unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:29.196654   23680 cli_runner.go:164] Run: docker container inspect docker-flags-224000 --format={{.State.Status}}
	W0425 13:18:29.248954   23680 cli_runner.go:211] docker container inspect docker-flags-224000 --format={{.State.Status}} returned with exit code 1
	I0425 13:18:29.249001   23680 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:29.249012   23680 oci.go:664] temporary error: container docker-flags-224000 status is  but expect it to be exited
	I0425 13:18:29.249038   23680 retry.go:31] will retry after 4.393983959s: couldn't verify container is exited. %v: unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:33.644434   23680 cli_runner.go:164] Run: docker container inspect docker-flags-224000 --format={{.State.Status}}
	W0425 13:18:33.697121   23680 cli_runner.go:211] docker container inspect docker-flags-224000 --format={{.State.Status}} returned with exit code 1
	I0425 13:18:33.697166   23680 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:33.697180   23680 oci.go:664] temporary error: container docker-flags-224000 status is  but expect it to be exited
	I0425 13:18:33.697206   23680 retry.go:31] will retry after 6.813724283s: couldn't verify container is exited. %v: unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:40.511582   23680 cli_runner.go:164] Run: docker container inspect docker-flags-224000 --format={{.State.Status}}
	W0425 13:18:40.564785   23680 cli_runner.go:211] docker container inspect docker-flags-224000 --format={{.State.Status}} returned with exit code 1
	I0425 13:18:40.564836   23680 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:18:40.564851   23680 oci.go:664] temporary error: container docker-flags-224000 status is  but expect it to be exited
	I0425 13:18:40.564886   23680 oci.go:88] couldn't shut down docker-flags-224000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	 
	I0425 13:18:40.564959   23680 cli_runner.go:164] Run: docker rm -f -v docker-flags-224000
	I0425 13:18:40.615472   23680 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-224000
	W0425 13:18:40.662964   23680 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-224000 returned with exit code 1
	I0425 13:18:40.663073   23680 cli_runner.go:164] Run: docker network inspect docker-flags-224000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0425 13:18:40.711282   23680 cli_runner.go:164] Run: docker network rm docker-flags-224000
	I0425 13:18:40.811516   23680 fix.go:124] Sleeping 1 second for extra luck!
	I0425 13:18:41.813678   23680 start.go:125] createHost starting for "" (driver="docker")
	I0425 13:18:41.835669   23680 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0425 13:18:41.835826   23680 start.go:159] libmachine.API.Create for "docker-flags-224000" (driver="docker")
	I0425 13:18:41.835856   23680 client.go:168] LocalClient.Create starting
	I0425 13:18:41.836113   23680 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/ca.pem
	I0425 13:18:41.836213   23680 main.go:141] libmachine: Decoding PEM data...
	I0425 13:18:41.836239   23680 main.go:141] libmachine: Parsing certificate...
	I0425 13:18:41.836323   23680 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/cert.pem
	I0425 13:18:41.836395   23680 main.go:141] libmachine: Decoding PEM data...
	I0425 13:18:41.836411   23680 main.go:141] libmachine: Parsing certificate...
	I0425 13:18:41.837091   23680 cli_runner.go:164] Run: docker network inspect docker-flags-224000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0425 13:18:41.889902   23680 cli_runner.go:211] docker network inspect docker-flags-224000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0425 13:18:41.889995   23680 network_create.go:281] running [docker network inspect docker-flags-224000] to gather additional debugging logs...
	I0425 13:18:41.890015   23680 cli_runner.go:164] Run: docker network inspect docker-flags-224000
	W0425 13:18:41.938215   23680 cli_runner.go:211] docker network inspect docker-flags-224000 returned with exit code 1
	I0425 13:18:41.938246   23680 network_create.go:284] error running [docker network inspect docker-flags-224000]: docker network inspect docker-flags-224000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-224000 not found
	I0425 13:18:41.938260   23680 network_create.go:286] output of [docker network inspect docker-flags-224000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-224000 not found
	
	** /stderr **
	I0425 13:18:41.938391   23680 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0425 13:18:41.988250   23680 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:18:41.989807   23680 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:18:41.991333   23680 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:18:41.992877   23680 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:18:41.994463   23680 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:18:41.996020   23680 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:18:41.996330   23680 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00216c410}
	I0425 13:18:41.996345   23680 network_create.go:124] attempt to create docker network docker-flags-224000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0425 13:18:41.996417   23680 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-224000 docker-flags-224000
	I0425 13:18:42.080715   23680 network_create.go:108] docker network docker-flags-224000 192.168.103.0/24 created
	I0425 13:18:42.080756   23680 kic.go:121] calculated static IP "192.168.103.2" for the "docker-flags-224000" container
	I0425 13:18:42.080868   23680 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0425 13:18:42.132127   23680 cli_runner.go:164] Run: docker volume create docker-flags-224000 --label name.minikube.sigs.k8s.io=docker-flags-224000 --label created_by.minikube.sigs.k8s.io=true
	I0425 13:18:42.180089   23680 oci.go:103] Successfully created a docker volume docker-flags-224000
	I0425 13:18:42.180199   23680 cli_runner.go:164] Run: docker run --rm --name docker-flags-224000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-224000 --entrypoint /usr/bin/test -v docker-flags-224000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0425 13:18:42.425944   23680 oci.go:107] Successfully prepared a docker volume docker-flags-224000
	I0425 13:18:42.425979   23680 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 13:18:42.425992   23680 kic.go:194] Starting extracting preloaded images to volume ...
	I0425 13:18:42.426093   23680 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-224000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0425 13:24:41.837774   23680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 13:24:41.837935   23680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000
	W0425 13:24:41.890621   23680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000 returned with exit code 1
	I0425 13:24:41.890733   23680 retry.go:31] will retry after 258.598801ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:24:42.151682   23680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000
	W0425 13:24:42.203355   23680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000 returned with exit code 1
	I0425 13:24:42.203467   23680 retry.go:31] will retry after 236.807607ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:24:42.440847   23680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000
	W0425 13:24:42.498159   23680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000 returned with exit code 1
	I0425 13:24:42.498269   23680 retry.go:31] will retry after 718.739152ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:24:43.218236   23680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000
	W0425 13:24:43.269815   23680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000 returned with exit code 1
	W0425 13:24:43.269914   23680 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	
	W0425 13:24:43.269940   23680 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:24:43.270001   23680 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0425 13:24:43.270061   23680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000
	W0425 13:24:43.317782   23680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000 returned with exit code 1
	I0425 13:24:43.317884   23680 retry.go:31] will retry after 278.595978ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:24:43.597298   23680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000
	W0425 13:24:43.649542   23680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000 returned with exit code 1
	I0425 13:24:43.649637   23680 retry.go:31] will retry after 501.570021ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:24:44.153627   23680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000
	W0425 13:24:44.204904   23680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000 returned with exit code 1
	I0425 13:24:44.205001   23680 retry.go:31] will retry after 735.757963ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:24:44.941736   23680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000
	W0425 13:24:44.994132   23680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000 returned with exit code 1
	W0425 13:24:44.994259   23680 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	
	W0425 13:24:44.994281   23680 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:24:44.994293   23680 start.go:128] duration metric: took 6m3.181061204s to createHost
	I0425 13:24:44.994364   23680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 13:24:44.994415   23680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000
	W0425 13:24:45.042282   23680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000 returned with exit code 1
	I0425 13:24:45.042371   23680 retry.go:31] will retry after 212.744436ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:24:45.255691   23680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000
	W0425 13:24:45.305514   23680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000 returned with exit code 1
	I0425 13:24:45.305606   23680 retry.go:31] will retry after 242.20694ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:24:45.550184   23680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000
	W0425 13:24:45.601115   23680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000 returned with exit code 1
	I0425 13:24:45.601205   23680 retry.go:31] will retry after 677.418157ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:24:46.281041   23680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000
	W0425 13:24:46.333271   23680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000 returned with exit code 1
	W0425 13:24:46.333365   23680 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	
	W0425 13:24:46.333394   23680 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:24:46.333451   23680 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0425 13:24:46.333503   23680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000
	W0425 13:24:46.384031   23680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000 returned with exit code 1
	I0425 13:24:46.384133   23680 retry.go:31] will retry after 240.496515ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:24:46.627030   23680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000
	W0425 13:24:46.677528   23680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000 returned with exit code 1
	I0425 13:24:46.677620   23680 retry.go:31] will retry after 542.029116ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:24:47.221266   23680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000
	W0425 13:24:47.272552   23680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000 returned with exit code 1
	I0425 13:24:47.272643   23680 retry.go:31] will retry after 434.458158ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:24:47.709496   23680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000
	W0425 13:24:47.762116   23680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000 returned with exit code 1
	W0425 13:24:47.762216   23680 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	
	W0425 13:24:47.762235   23680 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-224000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-224000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	I0425 13:24:47.762248   23680 fix.go:56] duration metric: took 6m26.826679763s for fixHost
	I0425 13:24:47.762255   23680 start.go:83] releasing machines lock for "docker-flags-224000", held for 6m26.826723395s
	W0425 13:24:47.762326   23680 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-224000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p docker-flags-224000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0425 13:24:47.784057   23680 out.go:177] 
	W0425 13:24:47.805800   23680 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0425 13:24:47.805863   23680 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0425 13:24:47.805887   23680 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0425 13:24:47.826696   23680 out.go:177] 

** /stderr **
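The whole stderr block above is one docker CLI call failing under retry: minikube resolves the SSH endpoint of its "kic" container by asking Docker which host port is published for 22/tcp, and since the docker-flags-224000 container was never created, every attempt exits 1 with "No such container". A minimal Go sketch of that lookup follows; hostPort22 is an illustrative name, and the program merely reproduces the failing command via os/exec rather than minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort22 runs the same docker inspect template seen throughout the log:
// index the published-ports map, take the first binding for 22/tcp, and
// print its HostPort.
func hostPort22(container string) (string, error) {
	format := `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		// A missing container makes docker exit 1 and print
		// "Error response from daemon: No such container: ..." on stderr.
		return "", fmt.Errorf("get port 22 for %q: %w", container, err)
	}
	return strings.Trim(strings.TrimSpace(string(out)), "'"), nil
}

func main() {
	port, err := hostPort22("docker-flags-224000")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh host port:", port)
}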
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-224000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-224000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-224000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (198.51258ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-224000 host status: state: unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	

** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-224000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-224000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-224000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (198.493125ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-224000 host status: state: unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000
	

** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-224000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-224000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-04-25 13:24:48.298541 -0700 PDT m=+6837.226222836
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-224000
helpers_test.go:235: (dbg) docker inspect docker-flags-224000:

-- stdout --
	[
	    {
	        "Name": "docker-flags-224000",
	        "Id": "e7d43c5baa09d9e09d772076575a3391d6d2219e7746f4e3b854bb6399a4735e",
	        "Created": "2024-04-25T20:18:42.041843834Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "docker-flags-224000"
	        }
	    }
	]

-- /stdout --
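The inspect output explains what survived the failure: at 13:18:42 minikube picked the first free private subnet (192.168.103.0/24, after skipping the reserved subnets listed in the log) and docker network create succeeded, but the kic container itself never came up, so Containers is empty and only the network was left to clean up. A hypothetical post-mortem helper is sketched below; parsing only the Name and Containers fields is a convenience assumption, and this is not part of the test suite.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// network holds the two fields of `docker network inspect` output relevant
// here; the full JSON (as shown above) has many more.
type network struct {
	Name       string
	Containers map[string]json.RawMessage
}

func main() {
	out, err := exec.Command("docker", "network", "inspect", "docker-flags-224000").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	var nets []network
	if err := json.Unmarshal(out, &nets); err != nil {
		fmt.Println("parse:", err)
		return
	}
	for _, n := range nets {
		// In the stdout above, Containers is {}: the network outlived a
		// container that was never successfully created.
		fmt.Printf("%s: %d attached container(s)\n", n.Name, len(n.Containers))
	}
}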
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-224000 -n docker-flags-224000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-224000 -n docker-flags-224000: exit status 7 (112.651291ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0425 13:24:48.461271   24417 status.go:249] status error: host: state: unknown state "docker-flags-224000": docker container inspect docker-flags-224000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-224000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-224000" host is not running, skipping log retrieval (state="Nonexistent")
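The --format={{.Host}} flag in the status call above is a Go text/template evaluated against minikube's status struct, which is why a bare "Nonexistent" comes back for a machine whose container is gone (with exit status 7, which the harness treats as "may be ok"). The Status type below is a stand-in with just enough fields for the demonstration; it is not minikube's real struct.

package main

import (
	"os"
	"text/template"
)

// Status is a minimal stand-in for the struct minikube renders its
// status templates against.
type Status struct {
	Name string
	Host string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	// A deleted or never-created machine reports "Nonexistent", as above.
	if err := tmpl.Execute(os.Stdout, Status{Name: "docker-flags-224000", Host: "Nonexistent"}); err != nil {
		panic(err)
	}
}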
helpers_test.go:175: Cleaning up "docker-flags-224000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-224000
--- FAIL: TestDockerFlags (756.77s)
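The cadence of the "retry.go:31] will retry after ..." lines in this failure (223ms, 285ms, 413ms, ... up to several seconds) is a growing, jittered backoff bounded by an overall deadline; once the 360-second createHost timeout fires, the loop gives up and the test exits with DRV_CREATE_TIMEOUT. Below is a minimal sketch of that pattern, assuming jittered exponential growth; minikube's own retry helper may differ in detail.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls f with a growing, jittered delay until it succeeds or the
// overall deadline passes, echoing the log's "will retry after" messages.
func retry(f func() error, initial, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := initial
	for {
		err := f()
		if err == nil {
			return nil
		}
		if time.Now().Add(delay).After(deadline) {
			return fmt.Errorf("timed out after %v: %w", timeout, err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay += time.Duration(rand.Int63n(int64(delay))) // grow with jitter
	}
}

func main() {
	attempts := 0
	err := retry(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("No such container: docker-flags-224000")
		}
		return nil
	}, 200*time.Millisecond, 5*time.Second)
	fmt.Println("attempts:", attempts, "err:", err)
}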

TestForceSystemdFlag (758.17s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-359000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-359000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 52 (12m37.067490762s)

-- stdout --
	* [force-systemd-flag-359000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18757
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-flag-359000" primary control-plane node in "force-systemd-flag-359000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-359000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0425 13:11:36.963558   23548 out.go:291] Setting OutFile to fd 1 ...
	I0425 13:11:36.963741   23548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 13:11:36.963746   23548 out.go:304] Setting ErrFile to fd 2...
	I0425 13:11:36.963750   23548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 13:11:36.963941   23548 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 13:11:36.965352   23548 out.go:298] Setting JSON to false
	I0425 13:11:36.987628   23548 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":13267,"bootTime":1714062629,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0425 13:11:36.987724   23548 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0425 13:11:37.009773   23548 out.go:177] * [force-systemd-flag-359000] minikube v1.33.0 on Darwin 14.4.1
	I0425 13:11:37.051517   23548 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 13:11:37.051577   23548 notify.go:220] Checking for updates...
	I0425 13:11:37.094371   23548 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
	I0425 13:11:37.115400   23548 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0425 13:11:37.136372   23548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 13:11:37.178454   23548 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube
	I0425 13:11:37.199470   23548 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 13:11:37.221039   23548 config.go:182] Loaded profile config "force-systemd-env-593000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 13:11:37.221167   23548 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 13:11:37.275963   23548 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0425 13:11:37.276124   23548 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0425 13:11:37.384567   23548 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:113 OomKillDisable:false NGoroutines:225 SystemTime:2024-04-25 20:11:37.37396029 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211088384 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0425 13:11:37.406301   23548 out.go:177] * Using the docker driver based on user configuration
	I0425 13:11:37.427155   23548 start.go:297] selected driver: docker
	I0425 13:11:37.427192   23548 start.go:901] validating driver "docker" against <nil>
	I0425 13:11:37.427207   23548 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 13:11:37.431641   23548 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0425 13:11:37.540494   23548 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:113 OomKillDisable:false NGoroutines:225 SystemTime:2024-04-25 20:11:37.529813549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211088384 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0425 13:11:37.540694   23548 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0425 13:11:37.540864   23548 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0425 13:11:37.562343   23548 out.go:177] * Using Docker Desktop driver with root privileges
	I0425 13:11:37.584009   23548 cni.go:84] Creating CNI manager for ""
	I0425 13:11:37.584053   23548 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0425 13:11:37.584066   23548 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
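
The cni.go lines above record the rule that fired here: with the docker driver and the docker container runtime on Kubernetes v1.24 or newer (where dockershim is gone), minikube recommends the bridge CNI and sets NetworkPlugin=cni. A minimal Go sketch of that decision; chooseCNI is an illustrative name, not minikube's API:

    package main

    import "fmt"

    // chooseCNI is a hypothetical reduction of the cni.go decision logged
    // above: docker driver + docker runtime on Kubernetes >= 1.24 means no
    // built-in pod networking, so fall back to the bridge CNI.
    func chooseCNI(driver, runtime string, k8sMinor int) string {
        if driver == "docker" && runtime == "docker" && k8sMinor >= 24 {
            return "bridge"
        }
        return "" // outside this case, minikube picks a default elsewhere
    }

    func main() {
        fmt.Println(chooseCNI("docker", "docker", 30)) // "bridge", as in this v1.30.0 run
    }
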
	I0425 13:11:37.584151   23548 start.go:340] cluster config:
	{Name:force-systemd-flag-359000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 13:11:37.604967   23548 out.go:177] * Starting "force-systemd-flag-359000" primary control-plane node in "force-systemd-flag-359000" cluster
	I0425 13:11:37.647081   23548 cache.go:121] Beginning downloading kic base image for docker with docker
	I0425 13:11:37.668109   23548 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0425 13:11:37.710083   23548 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 13:11:37.710164   23548 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0425 13:11:37.710184   23548 cache.go:56] Caching tarball of preloaded images
	I0425 13:11:37.710189   23548 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0425 13:11:37.710418   23548 preload.go:173] Found /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0425 13:11:37.710440   23548 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0425 13:11:37.710552   23548 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/force-systemd-flag-359000/config.json ...
	I0425 13:11:37.710589   23548 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/force-systemd-flag-359000/config.json: {Name:mk044bef584dfd921fadd26746d254cb67c2bde8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 13:11:37.759445   23548 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0425 13:11:37.759469   23548 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0425 13:11:37.759500   23548 cache.go:194] Successfully downloaded all kic artifacts
	I0425 13:11:37.759545   23548 start.go:360] acquireMachinesLock for force-systemd-flag-359000: {Name:mke352dd264bd381f827c10e02c8f73351b9810c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 13:11:37.759720   23548 start.go:364] duration metric: took 163.063µs to acquireMachinesLock for "force-systemd-flag-359000"
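
The acquireMachinesLock entries above carry Delay:500ms and Timeout:10m0s; the lock was free, so acquisition took only 163µs. A rough sketch of that poll-until-timeout pattern, assuming a simple exclusive lock file (acquireLock is illustrative, not minikube's actual lock package):

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquireLock polls for an exclusive lock file every delay and gives up
    // after timeout, mirroring the Delay/Timeout fields in the log above.
    func acquireLock(path string, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                return f.Close() // lock held; remove the file to release it
            }
            if time.Now().After(deadline) {
                return errors.New("timed out acquiring " + path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        fmt.Println(acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute))
    }
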
	I0425 13:11:37.759750   23548 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-359000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0425 13:11:37.759837   23548 start.go:125] createHost starting for "" (driver="docker")
	I0425 13:11:37.801933   23548 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0425 13:11:37.802290   23548 start.go:159] libmachine.API.Create for "force-systemd-flag-359000" (driver="docker")
	I0425 13:11:37.802339   23548 client.go:168] LocalClient.Create starting
	I0425 13:11:37.802545   23548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/ca.pem
	I0425 13:11:37.802644   23548 main.go:141] libmachine: Decoding PEM data...
	I0425 13:11:37.802676   23548 main.go:141] libmachine: Parsing certificate...
	I0425 13:11:37.802781   23548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/cert.pem
	I0425 13:11:37.802856   23548 main.go:141] libmachine: Decoding PEM data...
	I0425 13:11:37.802870   23548 main.go:141] libmachine: Parsing certificate...
	I0425 13:11:37.803760   23548 cli_runner.go:164] Run: docker network inspect force-systemd-flag-359000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0425 13:11:37.852539   23548 cli_runner.go:211] docker network inspect force-systemd-flag-359000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0425 13:11:37.852655   23548 network_create.go:281] running [docker network inspect force-systemd-flag-359000] to gather additional debugging logs...
	I0425 13:11:37.852669   23548 cli_runner.go:164] Run: docker network inspect force-systemd-flag-359000
	W0425 13:11:37.901196   23548 cli_runner.go:211] docker network inspect force-systemd-flag-359000 returned with exit code 1
	I0425 13:11:37.901226   23548 network_create.go:284] error running [docker network inspect force-systemd-flag-359000]: docker network inspect force-systemd-flag-359000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-359000 not found
	I0425 13:11:37.901236   23548 network_create.go:286] output of [docker network inspect force-systemd-flag-359000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-359000 not found
	
	** /stderr **
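
The exit-code-1 block above is expected on a fresh profile: docker network inspect fails with "network ... not found" when the named network does not exist yet, and minikube treats that failure as its cue to create the network. A sketch of that probe; networkExists is a hypothetical helper, not minikube's code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // networkExists probes a docker network by name and treats the daemon's
    // "not found" response as a clean "absent" rather than a hard error.
    func networkExists(name string) (bool, error) {
        out, err := exec.Command("docker", "network", "inspect", name).CombinedOutput()
        if err == nil {
            return true, nil
        }
        if strings.Contains(string(out), "not found") {
            return false, nil // expected on first start: nothing created yet
        }
        return false, fmt.Errorf("docker network inspect %s: %w", name, err)
    }

    func main() {
        fmt.Println(networkExists("force-systemd-flag-359000"))
    }
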
	I0425 13:11:37.901376   23548 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0425 13:11:37.951149   23548 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:11:37.952730   23548 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:11:37.953103   23548 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0024125a0}
	I0425 13:11:37.953142   23548 network_create.go:124] attempt to create docker network force-systemd-flag-359000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0425 13:11:37.953230   23548 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-359000 force-systemd-flag-359000
	I0425 13:11:38.037178   23548 network_create.go:108] docker network force-systemd-flag-359000 192.168.67.0/24 created
	I0425 13:11:38.037217   23548 kic.go:121] calculated static IP "192.168.67.2" for the "force-systemd-flag-359000" container
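
kic.go then derives the container's static IP directly from the chosen subnet: the gateway takes .1 and the first (control-plane) container takes .2, hence 192.168.67.2 for 192.168.67.0/24. A sketch of that arithmetic; staticIPFor is an illustrative name:

    package main

    import (
        "fmt"
        "net"
    )

    // staticIPFor returns the first client address in a /24: the network
    // address plus two (the gateway is network+1), as in the kic.go line above.
    func staticIPFor(cidr string) (string, error) {
        _, ipnet, err := net.ParseCIDR(cidr)
        if err != nil {
            return "", err
        }
        ip := ipnet.IP.To4()
        if ip == nil {
            return "", fmt.Errorf("not an IPv4 subnet: %s", cidr)
        }
        ip[3] += 2 // 192.168.67.0 -> 192.168.67.2
        return ip.String(), nil
    }

    func main() {
        fmt.Println(staticIPFor("192.168.67.0/24")) // 192.168.67.2 <nil>
    }
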
	I0425 13:11:38.037333   23548 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0425 13:11:38.087934   23548 cli_runner.go:164] Run: docker volume create force-systemd-flag-359000 --label name.minikube.sigs.k8s.io=force-systemd-flag-359000 --label created_by.minikube.sigs.k8s.io=true
	I0425 13:11:38.137147   23548 oci.go:103] Successfully created a docker volume force-systemd-flag-359000
	I0425 13:11:38.137268   23548 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-359000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-359000 --entrypoint /usr/bin/test -v force-systemd-flag-359000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0425 13:11:38.460317   23548 oci.go:107] Successfully prepared a docker volume force-systemd-flag-359000
	I0425 13:11:38.460364   23548 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 13:11:38.460378   23548 kic.go:194] Starting extracting preloaded images to volume ...
	I0425 13:11:38.460498   23548 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-359000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
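
The docker run above is how the preloaded images reach the machine's volume: the lz4 tarball is bind-mounted read-only, the named volume is mounted at /extractDir, and tar runs inside the kicbase image to unpack into the volume. Judging by the timestamps, the six-minute gap before the next log line falls inside this extraction. A sketch for reproducing the command programmatically; extractPreload is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload mirrors the docker run logged above: untar the preload
    // into the named volume using tar from the kicbase image itself.
    func extractPreload(tarball, volume, image string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("extract preload: %w: %s", err, out)
        }
        return nil
    }

    func main() {
        err := extractPreload(
            "/Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4",
            "force-systemd-flag-359000",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e")
        fmt.Println(err)
    }
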
	I0425 13:17:37.849968   23548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 13:17:37.850117   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:17:37.900629   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	I0425 13:17:37.900761   23548 retry.go:31] will retry after 231.645668ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:38.134828   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:17:38.186758   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	I0425 13:17:38.186869   23548 retry.go:31] will retry after 200.977835ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:38.390278   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:17:38.443344   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	I0425 13:17:38.443463   23548 retry.go:31] will retry after 746.237131ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:39.191258   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:17:39.243344   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	I0425 13:17:39.243433   23548 retry.go:31] will retry after 475.514296ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:39.721366   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:17:39.774443   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	W0425 13:17:39.774546   23548 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	
	W0425 13:17:39.774572   23548 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
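
Each failed docker container inspect above is immediately rescheduled by retry.go after a short randomized wait ("will retry after 231.645668ms", and so on). A sketch of that retry-with-jittered-backoff shape; retryable is an illustrative name, not minikube's exact signature:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryable reruns op with a short jittered wait between attempts,
    // printing the same style of message seen in the retry.go lines above.
    func retryable(attempts int, base time.Duration, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            wait := base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
        }
        return err
    }

    func main() {
        err := retryable(3, 200*time.Millisecond, func() error {
            return errors.New("No such container: force-systemd-flag-359000")
        })
        fmt.Println("gave up:", err)
    }
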
	I0425 13:17:39.774636   23548 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0425 13:17:39.774704   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:17:39.823255   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	I0425 13:17:39.823344   23548 retry.go:31] will retry after 345.289618ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:40.170998   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:17:40.221607   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	I0425 13:17:40.221697   23548 retry.go:31] will retry after 416.534566ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:40.640602   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:17:40.692444   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	I0425 13:17:40.692541   23548 retry.go:31] will retry after 300.605887ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:40.995534   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:17:41.048995   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	I0425 13:17:41.049088   23548 retry.go:31] will retry after 541.615982ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:41.591245   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:17:41.644113   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	W0425 13:17:41.644225   23548 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	
	W0425 13:17:41.644244   23548 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:41.644263   23548 start.go:128] duration metric: took 6m3.839156655s to createHost
	I0425 13:17:41.644278   23548 start.go:83] releasing machines lock for "force-systemd-flag-359000", held for 6m3.839285011s
	W0425 13:17:41.644294   23548 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
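
createHost gave up after exactly the StartHostTimeout recorded in the cluster config above (6m0s, reported as 360.000000 seconds). A sketch of that guard, assuming the provisioning work is raced against a context deadline; this createHost is illustrative, not minikube's implementation:

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // createHost fails with a timeout error if provisioning has not finished
    // within the deadline, like the 6-minute guard that fired in this run.
    func createHost(ctx context.Context, provision func() error) error {
        done := make(chan error, 1)
        go func() { done <- provision() }()
        select {
        case err := <-done:
            return err
        case <-ctx.Done():
            return errors.New("create host timed out")
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        fmt.Println(createHost(ctx, func() error { return nil }))
    }
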
	I0425 13:17:41.644705   23548 cli_runner.go:164] Run: docker container inspect force-systemd-flag-359000 --format={{.State.Status}}
	W0425 13:17:41.695394   23548 cli_runner.go:211] docker container inspect force-systemd-flag-359000 --format={{.State.Status}} returned with exit code 1
	I0425 13:17:41.695444   23548 delete.go:82] Unable to get host status for force-systemd-flag-359000, assuming it has already been deleted: state: unknown state "force-systemd-flag-359000": docker container inspect force-systemd-flag-359000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	W0425 13:17:41.695515   23548 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0425 13:17:41.695527   23548 start.go:728] Will try again in 5 seconds ...
	I0425 13:17:46.697418   23548 start.go:360] acquireMachinesLock for force-systemd-flag-359000: {Name:mke352dd264bd381f827c10e02c8f73351b9810c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 13:17:46.698283   23548 start.go:364] duration metric: took 800.546µs to acquireMachinesLock for "force-systemd-flag-359000"
	I0425 13:17:46.698439   23548 start.go:96] Skipping create...Using existing machine configuration
	I0425 13:17:46.698458   23548 fix.go:54] fixHost starting: 
	I0425 13:17:46.699004   23548 cli_runner.go:164] Run: docker container inspect force-systemd-flag-359000 --format={{.State.Status}}
	W0425 13:17:46.752385   23548 cli_runner.go:211] docker container inspect force-systemd-flag-359000 --format={{.State.Status}} returned with exit code 1
	I0425 13:17:46.752437   23548 fix.go:112] recreateIfNeeded on force-systemd-flag-359000: state= err=unknown state "force-systemd-flag-359000": docker container inspect force-systemd-flag-359000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:46.752458   23548 fix.go:117] machineExists: false. err=machine does not exist
	I0425 13:17:46.774235   23548 out.go:177] * docker "force-systemd-flag-359000" container is missing, will recreate.
	I0425 13:17:46.795623   23548 delete.go:124] DEMOLISHING force-systemd-flag-359000 ...
	I0425 13:17:46.795756   23548 cli_runner.go:164] Run: docker container inspect force-systemd-flag-359000 --format={{.State.Status}}
	W0425 13:17:46.844345   23548 cli_runner.go:211] docker container inspect force-systemd-flag-359000 --format={{.State.Status}} returned with exit code 1
	W0425 13:17:46.844404   23548 stop.go:83] unable to get state: unknown state "force-systemd-flag-359000": docker container inspect force-systemd-flag-359000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:46.844427   23548 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-359000": docker container inspect force-systemd-flag-359000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:46.844790   23548 cli_runner.go:164] Run: docker container inspect force-systemd-flag-359000 --format={{.State.Status}}
	W0425 13:17:46.892234   23548 cli_runner.go:211] docker container inspect force-systemd-flag-359000 --format={{.State.Status}} returned with exit code 1
	I0425 13:17:46.892306   23548 delete.go:82] Unable to get host status for force-systemd-flag-359000, assuming it has already been deleted: state: unknown state "force-systemd-flag-359000": docker container inspect force-systemd-flag-359000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:46.892392   23548 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-359000
	W0425 13:17:46.940110   23548 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-359000 returned with exit code 1
	I0425 13:17:46.940146   23548 kic.go:371] could not find the container force-systemd-flag-359000 to remove it. will try anyways
	I0425 13:17:46.940215   23548 cli_runner.go:164] Run: docker container inspect force-systemd-flag-359000 --format={{.State.Status}}
	W0425 13:17:46.987988   23548 cli_runner.go:211] docker container inspect force-systemd-flag-359000 --format={{.State.Status}} returned with exit code 1
	W0425 13:17:46.988044   23548 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-359000": docker container inspect force-systemd-flag-359000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:46.988120   23548 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-359000 /bin/bash -c "sudo init 0"
	W0425 13:17:47.037767   23548 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-359000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0425 13:17:47.037796   23548 oci.go:650] error shutdown force-systemd-flag-359000: docker exec --privileged -t force-systemd-flag-359000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
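
The teardown that follows is two-phase: first a graceful power-off attempt via docker exec with "sudo init 0" (which fails here because the container never existed), then a verification loop, and finally the force removal with docker rm -f -v seen further down. A sketch of that shape; demolish is an illustrative name:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // demolish tries a graceful "init 0" inside the container, then falls
    // back to force removal, mirroring the delete.go flow in this log.
    func demolish(name string) error {
        if out, err := exec.Command("docker", "exec", "--privileged", "-t",
            name, "/bin/bash", "-c", "sudo init 0").CombinedOutput(); err != nil {
            fmt.Printf("graceful shutdown failed (probably ok): %v: %s\n", err, out)
        }
        if out, err := exec.Command("docker", "rm", "-f", "-v", name).CombinedOutput(); err != nil {
            return fmt.Errorf("docker rm -f -v %s: %w: %s", name, err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(demolish("force-systemd-flag-359000"))
    }
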
	I0425 13:17:48.038167   23548 cli_runner.go:164] Run: docker container inspect force-systemd-flag-359000 --format={{.State.Status}}
	W0425 13:17:48.089720   23548 cli_runner.go:211] docker container inspect force-systemd-flag-359000 --format={{.State.Status}} returned with exit code 1
	I0425 13:17:48.089768   23548 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-359000": docker container inspect force-systemd-flag-359000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:48.089782   23548 oci.go:664] temporary error: container force-systemd-flag-359000 status is  but expect it to be exited
	I0425 13:17:48.089818   23548 retry.go:31] will retry after 494.494509ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-359000": docker container inspect force-systemd-flag-359000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:48.586223   23548 cli_runner.go:164] Run: docker container inspect force-systemd-flag-359000 --format={{.State.Status}}
	W0425 13:17:48.637136   23548 cli_runner.go:211] docker container inspect force-systemd-flag-359000 --format={{.State.Status}} returned with exit code 1
	I0425 13:17:48.637189   23548 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-359000": docker container inspect force-systemd-flag-359000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:48.637199   23548 oci.go:664] temporary error: container force-systemd-flag-359000 status is  but expect it to be exited
	I0425 13:17:48.637220   23548 retry.go:31] will retry after 476.175921ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-359000": docker container inspect force-systemd-flag-359000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:49.115733   23548 cli_runner.go:164] Run: docker container inspect force-systemd-flag-359000 --format={{.State.Status}}
	W0425 13:17:49.169160   23548 cli_runner.go:211] docker container inspect force-systemd-flag-359000 --format={{.State.Status}} returned with exit code 1
	I0425 13:17:49.169211   23548 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-359000": docker container inspect force-systemd-flag-359000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:49.169224   23548 oci.go:664] temporary error: container force-systemd-flag-359000 status is  but expect it to be exited
	I0425 13:17:49.169253   23548 retry.go:31] will retry after 1.093616248s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-359000": docker container inspect force-systemd-flag-359000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:50.263939   23548 cli_runner.go:164] Run: docker container inspect force-systemd-flag-359000 --format={{.State.Status}}
	W0425 13:17:50.313509   23548 cli_runner.go:211] docker container inspect force-systemd-flag-359000 --format={{.State.Status}} returned with exit code 1
	I0425 13:17:50.313555   23548 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-359000": docker container inspect force-systemd-flag-359000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:50.313568   23548 oci.go:664] temporary error: container force-systemd-flag-359000 status is  but expect it to be exited
	I0425 13:17:50.313593   23548 retry.go:31] will retry after 2.461614944s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-359000": docker container inspect force-systemd-flag-359000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:52.777569   23548 cli_runner.go:164] Run: docker container inspect force-systemd-flag-359000 --format={{.State.Status}}
	W0425 13:17:52.826224   23548 cli_runner.go:211] docker container inspect force-systemd-flag-359000 --format={{.State.Status}} returned with exit code 1
	I0425 13:17:52.826273   23548 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-359000": docker container inspect force-systemd-flag-359000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:52.826288   23548 oci.go:664] temporary error: container force-systemd-flag-359000 status is  but expect it to be exited
	I0425 13:17:52.826316   23548 retry.go:31] will retry after 3.442506122s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-359000": docker container inspect force-systemd-flag-359000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:56.271185   23548 cli_runner.go:164] Run: docker container inspect force-systemd-flag-359000 --format={{.State.Status}}
	W0425 13:17:56.321122   23548 cli_runner.go:211] docker container inspect force-systemd-flag-359000 --format={{.State.Status}} returned with exit code 1
	I0425 13:17:56.321174   23548 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-359000": docker container inspect force-systemd-flag-359000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:56.321182   23548 oci.go:664] temporary error: container force-systemd-flag-359000 status is  but expect it to be exited
	I0425 13:17:56.321206   23548 retry.go:31] will retry after 3.493551963s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-359000": docker container inspect force-systemd-flag-359000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:59.817133   23548 cli_runner.go:164] Run: docker container inspect force-systemd-flag-359000 --format={{.State.Status}}
	W0425 13:17:59.869846   23548 cli_runner.go:211] docker container inspect force-systemd-flag-359000 --format={{.State.Status}} returned with exit code 1
	I0425 13:17:59.869896   23548 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-359000": docker container inspect force-systemd-flag-359000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:17:59.869910   23548 oci.go:664] temporary error: container force-systemd-flag-359000 status is  but expect it to be exited
	I0425 13:17:59.869936   23548 retry.go:31] will retry after 6.906361432s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-359000": docker container inspect force-systemd-flag-359000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:18:06.776989   23548 cli_runner.go:164] Run: docker container inspect force-systemd-flag-359000 --format={{.State.Status}}
	W0425 13:18:06.826682   23548 cli_runner.go:211] docker container inspect force-systemd-flag-359000 --format={{.State.Status}} returned with exit code 1
	I0425 13:18:06.826733   23548 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-359000": docker container inspect force-systemd-flag-359000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:18:06.826745   23548 oci.go:664] temporary error: container force-systemd-flag-359000 status is  but expect it to be exited
	I0425 13:18:06.826775   23548 oci.go:88] couldn't shut down force-systemd-flag-359000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-359000": docker container inspect force-systemd-flag-359000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	 
	I0425 13:18:06.826849   23548 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-359000
	I0425 13:18:06.877346   23548 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-359000
	W0425 13:18:06.925981   23548 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-359000 returned with exit code 1
	I0425 13:18:06.926089   23548 cli_runner.go:164] Run: docker network inspect force-systemd-flag-359000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0425 13:18:06.974239   23548 cli_runner.go:164] Run: docker network rm force-systemd-flag-359000
	I0425 13:18:07.086577   23548 fix.go:124] Sleeping 1 second for extra luck!
	I0425 13:18:08.087011   23548 start.go:125] createHost starting for "" (driver="docker")
	I0425 13:18:08.107918   23548 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0425 13:18:08.108092   23548 start.go:159] libmachine.API.Create for "force-systemd-flag-359000" (driver="docker")
	I0425 13:18:08.108120   23548 client.go:168] LocalClient.Create starting
	I0425 13:18:08.108369   23548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/ca.pem
	I0425 13:18:08.108475   23548 main.go:141] libmachine: Decoding PEM data...
	I0425 13:18:08.108499   23548 main.go:141] libmachine: Parsing certificate...
	I0425 13:18:08.108578   23548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/cert.pem
	I0425 13:18:08.108652   23548 main.go:141] libmachine: Decoding PEM data...
	I0425 13:18:08.108666   23548 main.go:141] libmachine: Parsing certificate...
	I0425 13:18:08.128905   23548 cli_runner.go:164] Run: docker network inspect force-systemd-flag-359000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0425 13:18:08.179727   23548 cli_runner.go:211] docker network inspect force-systemd-flag-359000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0425 13:18:08.179818   23548 network_create.go:281] running [docker network inspect force-systemd-flag-359000] to gather additional debugging logs...
	I0425 13:18:08.179835   23548 cli_runner.go:164] Run: docker network inspect force-systemd-flag-359000
	W0425 13:18:08.227587   23548 cli_runner.go:211] docker network inspect force-systemd-flag-359000 returned with exit code 1
	I0425 13:18:08.227615   23548 network_create.go:284] error running [docker network inspect force-systemd-flag-359000]: docker network inspect force-systemd-flag-359000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-359000 not found
	I0425 13:18:08.227632   23548 network_create.go:286] output of [docker network inspect force-systemd-flag-359000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-359000 not found
	
	** /stderr **
	I0425 13:18:08.227788   23548 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0425 13:18:08.277885   23548 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:18:08.279471   23548 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:18:08.281108   23548 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:18:08.282780   23548 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:18:08.284508   23548 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:18:08.285155   23548 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00230d2b0}
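
On this second attempt the scan above walks the candidate /24s in steps of 9 in the third octet (49, 58, 67, 76, 85, ...) and takes the first subnet not already reserved by an existing network, landing on 192.168.94.0/24. A sketch of the observed sequence; freeSubnet is illustrative, and minikube's real check also inspects host interfaces:

    package main

    import "fmt"

    // freeSubnet steps the third octet by 9 from 192.168.49.0/24 and returns
    // the first candidate not already reserved, matching the sequence of
    // "skipping subnet" lines above.
    func freeSubnet(reserved map[string]bool) string {
        for octet := 49; octet <= 255; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if !reserved[cidr] {
                return cidr
            }
        }
        return ""
    }

    func main() {
        taken := map[string]bool{
            "192.168.49.0/24": true, "192.168.58.0/24": true,
            "192.168.67.0/24": true, "192.168.76.0/24": true,
            "192.168.85.0/24": true,
        }
        fmt.Println(freeSubnet(taken)) // 192.168.94.0/24
    }
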
	I0425 13:18:08.285177   23548 network_create.go:124] attempt to create docker network force-systemd-flag-359000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0425 13:18:08.285298   23548 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-359000 force-systemd-flag-359000
	I0425 13:18:08.369412   23548 network_create.go:108] docker network force-systemd-flag-359000 192.168.94.0/24 created
	I0425 13:18:08.369450   23548 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-flag-359000" container
	I0425 13:18:08.369562   23548 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0425 13:18:08.420204   23548 cli_runner.go:164] Run: docker volume create force-systemd-flag-359000 --label name.minikube.sigs.k8s.io=force-systemd-flag-359000 --label created_by.minikube.sigs.k8s.io=true
	I0425 13:18:08.467419   23548 oci.go:103] Successfully created a docker volume force-systemd-flag-359000
	I0425 13:18:08.467557   23548 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-359000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-359000 --entrypoint /usr/bin/test -v force-systemd-flag-359000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0425 13:18:08.717770   23548 oci.go:107] Successfully prepared a docker volume force-systemd-flag-359000
	I0425 13:18:08.717817   23548 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 13:18:08.717833   23548 kic.go:194] Starting extracting preloaded images to volume ...
	I0425 13:18:08.717966   23548 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-359000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0425 13:24:08.108860   23548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 13:24:08.108994   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:24:08.161050   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	I0425 13:24:08.161166   23548 retry.go:31] will retry after 266.206109ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:24:08.429430   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:24:08.480409   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	I0425 13:24:08.480516   23548 retry.go:31] will retry after 393.652302ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:24:08.876608   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:24:08.927831   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	I0425 13:24:08.927945   23548 retry.go:31] will retry after 374.949005ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:24:09.303443   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:24:09.354579   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	W0425 13:24:09.354686   23548 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	
	W0425 13:24:09.354705   23548 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:24:09.354778   23548 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0425 13:24:09.354836   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:24:09.403368   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	I0425 13:24:09.403480   23548 retry.go:31] will retry after 287.041769ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:24:09.691256   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:24:09.743982   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	I0425 13:24:09.744081   23548 retry.go:31] will retry after 549.383455ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:24:10.295886   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:24:10.346280   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	I0425 13:24:10.346380   23548 retry.go:31] will retry after 685.008005ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:24:11.033794   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:24:11.085253   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	W0425 13:24:11.085366   23548 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	
	W0425 13:24:11.085383   23548 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:24:11.085392   23548 start.go:128] duration metric: took 6m2.998772096s to createHost
	I0425 13:24:11.085455   23548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 13:24:11.085512   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:24:11.133429   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	I0425 13:24:11.133523   23548 retry.go:31] will retry after 203.366592ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:24:11.337278   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:24:11.388197   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	I0425 13:24:11.388293   23548 retry.go:31] will retry after 283.378672ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:24:11.672804   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:24:11.724339   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	I0425 13:24:11.724429   23548 retry.go:31] will retry after 505.179218ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:24:12.231018   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:24:12.282494   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	W0425 13:24:12.282602   23548 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	
	W0425 13:24:12.282620   23548 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:24:12.282672   23548 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0425 13:24:12.282725   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:24:12.329978   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	I0425 13:24:12.330074   23548 retry.go:31] will retry after 248.296047ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:24:12.580657   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:24:12.632150   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	I0425 13:24:12.632250   23548 retry.go:31] will retry after 338.716989ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:24:12.971632   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:24:13.023859   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	I0425 13:24:13.023968   23548 retry.go:31] will retry after 770.306704ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:24:13.795787   23548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000
	W0425 13:24:13.847074   23548 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000 returned with exit code 1
	W0425 13:24:13.847202   23548 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	
	W0425 13:24:13.847219   23548 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-359000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-359000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	I0425 13:24:13.847225   23548 fix.go:56] duration metric: took 6m27.149252249s for fixHost
	I0425 13:24:13.847234   23548 start.go:83] releasing machines lock for "force-systemd-flag-359000", held for 6m27.149318303s
	W0425 13:24:13.847315   23548 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-359000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-359000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0425 13:24:13.890418   23548 out.go:177] 
	W0425 13:24:13.911535   23548 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0425 13:24:13.911600   23548 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0425 13:24:13.911714   23548 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0425 13:24:13.953572   23548 out.go:177] 

                                                
                                                
** /stderr **
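Every retry in the stderr log above is the same probe: minikube shells out to docker container inspect with a Go template that pulls the host port bound to the container's 22/tcp out of .NetworkSettings.Ports, and the probe exits 1 with "No such container" once the container is gone. A minimal, self-contained sketch of that lookup (the helper name is illustrative, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort asks the Docker daemon which host port is published for the
	// container's 22/tcp, using the same Go template as the log above.
	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).CombinedOutput()
		if err != nil {
			// A deleted container surfaces here as exit status 1 with
			// "Error response from daemon: No such container: ..." on stderr.
			return "", fmt.Errorf("inspect %s: %v: %s", container, err, out)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("force-systemd-flag-359000")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("ssh host port:", port)
	}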
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-359000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-359000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-359000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (200.23441ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-flag-359000 host status: state: unknown state "force-systemd-flag-359000": docker container inspect force-systemd-flag-359000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000
	

                                                
                                                
** /stderr **
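The follow-up check above tries to run docker info --format {{.CgroupDriver}} inside the guest over SSH, which cannot succeed once the host is gone (hence exit status 80). Against a reachable daemon the same query is trivial; a host-side sketch for comparison (illustrative only):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask the local daemon which cgroup driver it uses ("cgroupfs" or
		// "systemd") -- the value TestForceSystemdFlag wanted to assert on.
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			fmt.Println("docker info failed:", err)
			return
		}
		fmt.Println("cgroup driver:", strings.TrimSpace(string(out)))
	}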
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-359000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-04-25 13:24:14.229834 -0700 PDT m=+6803.157470612
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-359000
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-359000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "force-systemd-flag-359000",
	        "Id": "01f6c806938405810210cfc98fd002039489078edf5b59329bfd29ae3d73b1f9",
	        "Created": "2024-04-25T20:18:08.330735527Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-flag-359000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
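Note that the docker inspect output above describes the bridge network, not a container: Containers is empty, so the network is the only artifact that outlived the failed host. Leftovers like this can be located by minikube's ownership label; a sketch (listing only; minikube delete -p remains the supported cleanup path):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// List Docker networks carrying minikube's ownership label, the same
		// label visible in the inspect output above.
		out, err := exec.Command("docker", "network", "ls",
			"--filter", "label=created_by.minikube.sigs.k8s.io=true",
			"--format", "{{.Name}}").Output()
		if err != nil {
			fmt.Println("network ls failed:", err)
			return
		}
		for _, name := range strings.Fields(string(out)) {
			fmt.Println("minikube-owned network:", name)
		}
	}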
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-359000 -n force-systemd-flag-359000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-359000 -n force-systemd-flag-359000: exit status 7 (112.7179ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 13:24:14.395622   24286 status.go:249] status error: host: state: unknown state "force-systemd-flag-359000": docker container inspect force-systemd-flag-359000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-359000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-359000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-flag-359000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-359000
--- FAIL: TestForceSystemdFlag (758.17s)
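The retry.go lines throughout this failure show the shape of minikube's polling: re-run a probe after a short jittered delay until it succeeds or the overall budget is spent. A minimal sketch of that loop (delays and budget are illustrative, not minikube's exact policy):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntil re-invokes fn with a short jittered pause between attempts
	// and gives up once the total budget is exhausted.
	func retryUntil(budget time.Duration, fn func() error) error {
		start := time.Now()
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start) > budget {
				return fmt.Errorf("gave up after %s: %w", budget, err)
			}
			wait := 200*time.Millisecond + time.Duration(rand.Int63n(int64(600*time.Millisecond)))
			fmt.Printf("will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
		}
	}

	func main() {
		err := retryUntil(2*time.Second, func() error {
			return errors.New("No such container: force-systemd-flag-359000")
		})
		fmt.Println(err)
	}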

                                                
                                    
TestForceSystemdEnv (754.57s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-593000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E0425 13:00:09.073690    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 13:02:06.072228    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 13:04:03.018155    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 13:05:09.073014    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 13:08:12.133912    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 13:09:03.018830    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 13:10:09.074222    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-593000 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 52 (12m33.457289461s)

                                                
                                                
-- stdout --
	* [force-systemd-env-593000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18757
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-env-593000" primary control-plane node in "force-systemd-env-593000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-593000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 12:59:37.850803   22886 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:59:37.851059   22886 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:59:37.851064   22886 out.go:304] Setting ErrFile to fd 2...
	I0425 12:59:37.851068   22886 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:59:37.851250   22886 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 12:59:37.852715   22886 out.go:298] Setting JSON to false
	I0425 12:59:37.874960   22886 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":12548,"bootTime":1714062629,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0425 12:59:37.875049   22886 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0425 12:59:37.897364   22886 out.go:177] * [force-systemd-env-593000] minikube v1.33.0 on Darwin 14.4.1
	I0425 12:59:37.917339   22886 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 12:59:37.917370   22886 notify.go:220] Checking for updates...
	I0425 12:59:37.959228   22886 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
	I0425 12:59:37.980302   22886 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0425 12:59:38.000975   22886 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 12:59:38.022479   22886 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube
	I0425 12:59:38.043205   22886 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0425 12:59:38.065114   22886 config.go:182] Loaded profile config "offline-docker-438000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:59:38.065266   22886 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 12:59:38.120063   22886 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0425 12:59:38.120215   22886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0425 12:59:38.228925   22886 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:105 OomKillDisable:false NGoroutines:195 SystemTime:2024-04-25 19:59:38.218014645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211088384 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0425 12:59:38.271191   22886 out.go:177] * Using the docker driver based on user configuration
	I0425 12:59:38.294075   22886 start.go:297] selected driver: docker
	I0425 12:59:38.294108   22886 start.go:901] validating driver "docker" against <nil>
	I0425 12:59:38.294122   22886 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 12:59:38.298527   22886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0425 12:59:38.408073   22886 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:105 OomKillDisable:false NGoroutines:195 SystemTime:2024-04-25 19:59:38.397494242 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211088384 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0425 12:59:38.408254   22886 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0425 12:59:38.408451   22886 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0425 12:59:38.430036   22886 out.go:177] * Using Docker Desktop driver with root privileges
	I0425 12:59:38.451000   22886 cni.go:84] Creating CNI manager for ""
	I0425 12:59:38.451046   22886 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0425 12:59:38.451064   22886 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0425 12:59:38.451165   22886 start.go:340] cluster config:
	{Name:force-systemd-env-593000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 12:59:38.472729   22886 out.go:177] * Starting "force-systemd-env-593000" primary control-plane node in "force-systemd-env-593000" cluster
	I0425 12:59:38.515662   22886 cache.go:121] Beginning downloading kic base image for docker with docker
	I0425 12:59:38.536911   22886 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0425 12:59:38.578667   22886 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 12:59:38.578723   22886 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0425 12:59:38.578745   22886 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0425 12:59:38.578775   22886 cache.go:56] Caching tarball of preloaded images
	I0425 12:59:38.578995   22886 preload.go:173] Found /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0425 12:59:38.579017   22886 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0425 12:59:38.579176   22886 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/force-systemd-env-593000/config.json ...
	I0425 12:59:38.579917   22886 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/force-systemd-env-593000/config.json: {Name:mkfab60d4dbcc987762df449b12ba9bc43535e5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 12:59:38.631384   22886 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0425 12:59:38.631411   22886 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0425 12:59:38.631428   22886 cache.go:194] Successfully downloaded all kic artifacts
	I0425 12:59:38.631464   22886 start.go:360] acquireMachinesLock for force-systemd-env-593000: {Name:mk5a0c9674358119995d58671cc762d4e1804129 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 12:59:38.631627   22886 start.go:364] duration metric: took 151.605µs to acquireMachinesLock for "force-systemd-env-593000"
	I0425 12:59:38.631656   22886 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-593000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0425 12:59:38.631730   22886 start.go:125] createHost starting for "" (driver="docker")
	I0425 12:59:38.673899   22886 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0425 12:59:38.674252   22886 start.go:159] libmachine.API.Create for "force-systemd-env-593000" (driver="docker")
	I0425 12:59:38.674301   22886 client.go:168] LocalClient.Create starting
	I0425 12:59:38.674548   22886 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/ca.pem
	I0425 12:59:38.674643   22886 main.go:141] libmachine: Decoding PEM data...
	I0425 12:59:38.674681   22886 main.go:141] libmachine: Parsing certificate...
	I0425 12:59:38.674775   22886 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/cert.pem
	I0425 12:59:38.674871   22886 main.go:141] libmachine: Decoding PEM data...
	I0425 12:59:38.674892   22886 main.go:141] libmachine: Parsing certificate...
	I0425 12:59:38.675825   22886 cli_runner.go:164] Run: docker network inspect force-systemd-env-593000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0425 12:59:38.724373   22886 cli_runner.go:211] docker network inspect force-systemd-env-593000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0425 12:59:38.724487   22886 network_create.go:281] running [docker network inspect force-systemd-env-593000] to gather additional debugging logs...
	I0425 12:59:38.724509   22886 cli_runner.go:164] Run: docker network inspect force-systemd-env-593000
	W0425 12:59:38.772989   22886 cli_runner.go:211] docker network inspect force-systemd-env-593000 returned with exit code 1
	I0425 12:59:38.773015   22886 network_create.go:284] error running [docker network inspect force-systemd-env-593000]: docker network inspect force-systemd-env-593000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-593000 not found
	I0425 12:59:38.773030   22886 network_create.go:286] output of [docker network inspect force-systemd-env-593000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-593000 not found
	
	** /stderr **
	I0425 12:59:38.773163   22886 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0425 12:59:38.822750   22886 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 12:59:38.824134   22886 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 12:59:38.825756   22886 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 12:59:38.826107   22886 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00229cce0}
	I0425 12:59:38.826123   22886 network_create.go:124] attempt to create docker network force-systemd-env-593000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0425 12:59:38.826197   22886 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-593000 force-systemd-env-593000
	W0425 12:59:38.874852   22886 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-593000 force-systemd-env-593000 returned with exit code 1
	W0425 12:59:38.874887   22886 network_create.go:149] failed to create docker network force-systemd-env-593000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-593000 force-systemd-env-593000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0425 12:59:38.874908   22886 network_create.go:116] failed to create docker network force-systemd-env-593000 192.168.76.0/24, will retry: subnet is taken
	I0425 12:59:38.876464   22886 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 12:59:38.876830   22886 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023a9720}
	I0425 12:59:38.876843   22886 network_create.go:124] attempt to create docker network force-systemd-env-593000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0425 12:59:38.876911   22886 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-593000 force-systemd-env-593000
	I0425 12:59:38.961591   22886 network_create.go:108] docker network force-systemd-env-593000 192.168.85.0/24 created
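The sequence just above is minikube's subnet scan: candidate /24s advance by 9 in the third octet (192.168.49.0, .58.0, .67.0, .76.0, .85.0), locally reserved ones are skipped up front, and a daemon answer of "Pool overlaps with other one on this address space" pushes the scan to the next candidate. A sketch of that selection loop (the network name is a placeholder):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Walk the same /24 candidates as the log and create the first bridge
		// network the daemon accepts.
		for third := 49; third <= 103; third += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", third)
			gateway := fmt.Sprintf("192.168.%d.1", third)
			out, err := exec.Command("docker", "network", "create",
				"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
				"demo-net").CombinedOutput()
			if err != nil && strings.Contains(string(out), "Pool overlaps") {
				fmt.Println("subnet taken, trying next:", subnet)
				continue
			}
			if err != nil {
				fmt.Println("create failed:", err, string(out))
				return
			}
			fmt.Println("created demo-net on", subnet)
			return
		}
		fmt.Println("no free subnet found")
	}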
	I0425 12:59:38.961633   22886 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-593000" container
	I0425 12:59:38.961749   22886 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0425 12:59:39.011787   22886 cli_runner.go:164] Run: docker volume create force-systemd-env-593000 --label name.minikube.sigs.k8s.io=force-systemd-env-593000 --label created_by.minikube.sigs.k8s.io=true
	I0425 12:59:39.060979   22886 oci.go:103] Successfully created a docker volume force-systemd-env-593000
	I0425 12:59:39.061125   22886 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-593000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-593000 --entrypoint /usr/bin/test -v force-systemd-env-593000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0425 12:59:39.369477   22886 oci.go:107] Successfully prepared a docker volume force-systemd-env-593000
	I0425 12:59:39.369517   22886 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 12:59:39.369530   22886 kic.go:194] Starting extracting preloaded images to volume ...
	I0425 12:59:39.369633   22886 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-593000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
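The extraction kicked off above runs tar inside a throwaway kicbase container: the preload tarball is bind-mounted read-only at /preloaded.tar and the cluster's named volume at /extractDir, so the cached images land in the volume that later backs the node's /var. It is this step that stalls for six minutes before the df probes below start failing. The same invocation from Go (paths and the volume name are placeholders):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		tarball := "/path/to/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4"
		volume := "my-profile" // named volume that becomes the node's /var
		image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706"

		// Mirror the log's command: override the entrypoint with tar and
		// unpack the lz4 tarball straight into the mounted volume.
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Println("extract failed:", err, string(out))
			return
		}
		fmt.Println("preload extracted into volume", volume)
	}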
	I0425 13:05:38.677625   22886 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 13:05:38.677795   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:05:38.729140   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	I0425 13:05:38.729245   22886 retry.go:31] will retry after 372.138912ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:39.103724   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:05:39.155407   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	I0425 13:05:39.155521   22886 retry.go:31] will retry after 502.410197ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:39.659791   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:05:39.709916   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	I0425 13:05:39.710009   22886 retry.go:31] will retry after 572.24276ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:40.282960   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:05:40.333755   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	W0425 13:05:40.333879   22886 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	
	W0425 13:05:40.333909   22886 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:40.333978   22886 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0425 13:05:40.334051   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:05:40.382362   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	I0425 13:05:40.382471   22886 retry.go:31] will retry after 204.017268ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:40.587044   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:05:40.639935   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	I0425 13:05:40.640030   22886 retry.go:31] will retry after 371.121204ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:41.012549   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:05:41.065457   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	I0425 13:05:41.065546   22886 retry.go:31] will retry after 332.126462ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:41.400087   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:05:41.452451   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	I0425 13:05:41.452558   22886 retry.go:31] will retry after 472.906267ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:41.926953   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:05:41.978468   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	W0425 13:05:41.978571   22886 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	
	W0425 13:05:41.978587   22886 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:41.978606   22886 start.go:128] duration metric: took 6m3.346356427s to createHost
	I0425 13:05:41.978614   22886 start.go:83] releasing machines lock for "force-systemd-env-593000", held for 6m3.346471927s
	W0425 13:05:41.978630   22886 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0425 13:05:41.979063   22886 cli_runner.go:164] Run: docker container inspect force-systemd-env-593000 --format={{.State.Status}}
	W0425 13:05:42.029948   22886 cli_runner.go:211] docker container inspect force-systemd-env-593000 --format={{.State.Status}} returned with exit code 1
	I0425 13:05:42.030002   22886 delete.go:82] Unable to get host status for force-systemd-env-593000, assuming it has already been deleted: state: unknown state "force-systemd-env-593000": docker container inspect force-systemd-env-593000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	W0425 13:05:42.030079   22886 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0425 13:05:42.030091   22886 start.go:728] Will try again in 5 seconds ...
	I0425 13:05:47.030498   22886 start.go:360] acquireMachinesLock for force-systemd-env-593000: {Name:mk5a0c9674358119995d58671cc762d4e1804129 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 13:05:47.030714   22886 start.go:364] duration metric: took 169.408µs to acquireMachinesLock for "force-systemd-env-593000"
	I0425 13:05:47.030748   22886 start.go:96] Skipping create...Using existing machine configuration
	I0425 13:05:47.030766   22886 fix.go:54] fixHost starting: 
	I0425 13:05:47.031198   22886 cli_runner.go:164] Run: docker container inspect force-systemd-env-593000 --format={{.State.Status}}
	W0425 13:05:47.081932   22886 cli_runner.go:211] docker container inspect force-systemd-env-593000 --format={{.State.Status}} returned with exit code 1
	I0425 13:05:47.081978   22886 fix.go:112] recreateIfNeeded on force-systemd-env-593000: state= err=unknown state "force-systemd-env-593000": docker container inspect force-systemd-env-593000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:47.082004   22886 fix.go:117] machineExists: false. err=machine does not exist
	I0425 13:05:47.104006   22886 out.go:177] * docker "force-systemd-env-593000" container is missing, will recreate.
	I0425 13:05:47.146740   22886 delete.go:124] DEMOLISHING force-systemd-env-593000 ...
	I0425 13:05:47.146977   22886 cli_runner.go:164] Run: docker container inspect force-systemd-env-593000 --format={{.State.Status}}
	W0425 13:05:47.196396   22886 cli_runner.go:211] docker container inspect force-systemd-env-593000 --format={{.State.Status}} returned with exit code 1
	W0425 13:05:47.196447   22886 stop.go:83] unable to get state: unknown state "force-systemd-env-593000": docker container inspect force-systemd-env-593000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:47.196467   22886 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-593000": docker container inspect force-systemd-env-593000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:47.196839   22886 cli_runner.go:164] Run: docker container inspect force-systemd-env-593000 --format={{.State.Status}}
	W0425 13:05:47.243916   22886 cli_runner.go:211] docker container inspect force-systemd-env-593000 --format={{.State.Status}} returned with exit code 1
	I0425 13:05:47.243971   22886 delete.go:82] Unable to get host status for force-systemd-env-593000, assuming it has already been deleted: state: unknown state "force-systemd-env-593000": docker container inspect force-systemd-env-593000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:47.244072   22886 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-593000
	W0425 13:05:47.291539   22886 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-593000 returned with exit code 1
	I0425 13:05:47.291572   22886 kic.go:371] could not find the container force-systemd-env-593000 to remove it. will try anyways
	I0425 13:05:47.291654   22886 cli_runner.go:164] Run: docker container inspect force-systemd-env-593000 --format={{.State.Status}}
	W0425 13:05:47.339712   22886 cli_runner.go:211] docker container inspect force-systemd-env-593000 --format={{.State.Status}} returned with exit code 1
	W0425 13:05:47.339763   22886 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-593000": docker container inspect force-systemd-env-593000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:47.339844   22886 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-593000 /bin/bash -c "sudo init 0"
	W0425 13:05:47.387640   22886 cli_runner.go:211] docker exec --privileged -t force-systemd-env-593000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0425 13:05:47.387673   22886 oci.go:650] error shutdown force-systemd-env-593000: docker exec --privileged -t force-systemd-env-593000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:48.390143   22886 cli_runner.go:164] Run: docker container inspect force-systemd-env-593000 --format={{.State.Status}}
	W0425 13:05:48.442074   22886 cli_runner.go:211] docker container inspect force-systemd-env-593000 --format={{.State.Status}} returned with exit code 1
	I0425 13:05:48.442123   22886 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-593000": docker container inspect force-systemd-env-593000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:48.442136   22886 oci.go:664] temporary error: container force-systemd-env-593000 status is  but expect it to be exited
	I0425 13:05:48.442162   22886 retry.go:31] will retry after 656.555311ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-593000": docker container inspect force-systemd-env-593000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:49.101091   22886 cli_runner.go:164] Run: docker container inspect force-systemd-env-593000 --format={{.State.Status}}
	W0425 13:05:49.153356   22886 cli_runner.go:211] docker container inspect force-systemd-env-593000 --format={{.State.Status}} returned with exit code 1
	I0425 13:05:49.153406   22886 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-593000": docker container inspect force-systemd-env-593000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:49.153421   22886 oci.go:664] temporary error: container force-systemd-env-593000 status is  but expect it to be exited
	I0425 13:05:49.153446   22886 retry.go:31] will retry after 440.512919ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-593000": docker container inspect force-systemd-env-593000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:49.595920   22886 cli_runner.go:164] Run: docker container inspect force-systemd-env-593000 --format={{.State.Status}}
	W0425 13:05:49.647039   22886 cli_runner.go:211] docker container inspect force-systemd-env-593000 --format={{.State.Status}} returned with exit code 1
	I0425 13:05:49.647093   22886 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-593000": docker container inspect force-systemd-env-593000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:49.647104   22886 oci.go:664] temporary error: container force-systemd-env-593000 status is  but expect it to be exited
	I0425 13:05:49.647129   22886 retry.go:31] will retry after 1.144416133s: couldn't verify container is exited. %v: unknown state "force-systemd-env-593000": docker container inspect force-systemd-env-593000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:50.794006   22886 cli_runner.go:164] Run: docker container inspect force-systemd-env-593000 --format={{.State.Status}}
	W0425 13:05:50.845935   22886 cli_runner.go:211] docker container inspect force-systemd-env-593000 --format={{.State.Status}} returned with exit code 1
	I0425 13:05:50.845979   22886 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-593000": docker container inspect force-systemd-env-593000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:50.845991   22886 oci.go:664] temporary error: container force-systemd-env-593000 status is  but expect it to be exited
	I0425 13:05:50.846016   22886 retry.go:31] will retry after 2.443002135s: couldn't verify container is exited. %v: unknown state "force-systemd-env-593000": docker container inspect force-systemd-env-593000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:53.291402   22886 cli_runner.go:164] Run: docker container inspect force-systemd-env-593000 --format={{.State.Status}}
	W0425 13:05:53.342631   22886 cli_runner.go:211] docker container inspect force-systemd-env-593000 --format={{.State.Status}} returned with exit code 1
	I0425 13:05:53.342683   22886 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-593000": docker container inspect force-systemd-env-593000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:53.342695   22886 oci.go:664] temporary error: container force-systemd-env-593000 status is  but expect it to be exited
	I0425 13:05:53.342721   22886 retry.go:31] will retry after 2.524369366s: couldn't verify container is exited. %v: unknown state "force-systemd-env-593000": docker container inspect force-systemd-env-593000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:55.868009   22886 cli_runner.go:164] Run: docker container inspect force-systemd-env-593000 --format={{.State.Status}}
	W0425 13:05:55.921348   22886 cli_runner.go:211] docker container inspect force-systemd-env-593000 --format={{.State.Status}} returned with exit code 1
	I0425 13:05:55.921402   22886 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-593000": docker container inspect force-systemd-env-593000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:55.921414   22886 oci.go:664] temporary error: container force-systemd-env-593000 status is  but expect it to be exited
	I0425 13:05:55.921448   22886 retry.go:31] will retry after 3.7122155s: couldn't verify container is exited. %v: unknown state "force-systemd-env-593000": docker container inspect force-systemd-env-593000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:59.634474   22886 cli_runner.go:164] Run: docker container inspect force-systemd-env-593000 --format={{.State.Status}}
	W0425 13:05:59.686781   22886 cli_runner.go:211] docker container inspect force-systemd-env-593000 --format={{.State.Status}} returned with exit code 1
	I0425 13:05:59.686838   22886 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-593000": docker container inspect force-systemd-env-593000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:05:59.686848   22886 oci.go:664] temporary error: container force-systemd-env-593000 status is  but expect it to be exited
	I0425 13:05:59.686874   22886 retry.go:31] will retry after 4.716282273s: couldn't verify container is exited. %v: unknown state "force-systemd-env-593000": docker container inspect force-systemd-env-593000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:06:04.405595   22886 cli_runner.go:164] Run: docker container inspect force-systemd-env-593000 --format={{.State.Status}}
	W0425 13:06:04.456906   22886 cli_runner.go:211] docker container inspect force-systemd-env-593000 --format={{.State.Status}} returned with exit code 1
	I0425 13:06:04.456959   22886 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-593000": docker container inspect force-systemd-env-593000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:06:04.456970   22886 oci.go:664] temporary error: container force-systemd-env-593000 status is  but expect it to be exited
	I0425 13:06:04.457001   22886 oci.go:88] couldn't shut down force-systemd-env-593000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-593000": docker container inspect force-systemd-env-593000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	 
	I0425 13:06:04.457071   22886 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-593000
	I0425 13:06:04.505862   22886 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-593000
	W0425 13:06:04.553488   22886 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-593000 returned with exit code 1
	I0425 13:06:04.553609   22886 cli_runner.go:164] Run: docker network inspect force-systemd-env-593000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0425 13:06:04.601850   22886 cli_runner.go:164] Run: docker network rm force-systemd-env-593000
	I0425 13:06:04.710966   22886 fix.go:124] Sleeping 1 second for extra luck!
	I0425 13:06:05.712338   22886 start.go:125] createHost starting for "" (driver="docker")
	I0425 13:06:05.733978   22886 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0425 13:06:05.734164   22886 start.go:159] libmachine.API.Create for "force-systemd-env-593000" (driver="docker")
	I0425 13:06:05.734192   22886 client.go:168] LocalClient.Create starting
	I0425 13:06:05.734425   22886 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/ca.pem
	I0425 13:06:05.734528   22886 main.go:141] libmachine: Decoding PEM data...
	I0425 13:06:05.734553   22886 main.go:141] libmachine: Parsing certificate...
	I0425 13:06:05.734632   22886 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/cert.pem
	I0425 13:06:05.734707   22886 main.go:141] libmachine: Decoding PEM data...
	I0425 13:06:05.734721   22886 main.go:141] libmachine: Parsing certificate...
	I0425 13:06:05.735437   22886 cli_runner.go:164] Run: docker network inspect force-systemd-env-593000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0425 13:06:05.786773   22886 cli_runner.go:211] docker network inspect force-systemd-env-593000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0425 13:06:05.786867   22886 network_create.go:281] running [docker network inspect force-systemd-env-593000] to gather additional debugging logs...
	I0425 13:06:05.786889   22886 cli_runner.go:164] Run: docker network inspect force-systemd-env-593000
	W0425 13:06:05.837055   22886 cli_runner.go:211] docker network inspect force-systemd-env-593000 returned with exit code 1
	I0425 13:06:05.837089   22886 network_create.go:284] error running [docker network inspect force-systemd-env-593000]: docker network inspect force-systemd-env-593000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-593000 not found
	I0425 13:06:05.837103   22886 network_create.go:286] output of [docker network inspect force-systemd-env-593000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-593000 not found
	
	** /stderr **
	I0425 13:06:05.837235   22886 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0425 13:06:05.887734   22886 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:06:05.889254   22886 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:06:05.890591   22886 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:06:05.892115   22886 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:06:05.893539   22886 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:06:05.895122   22886 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 13:06:05.895509   22886 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023a9c50}
	I0425 13:06:05.895522   22886 network_create.go:124] attempt to create docker network force-systemd-env-593000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0425 13:06:05.895593   22886 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-593000 force-systemd-env-593000
	I0425 13:06:05.978901   22886 network_create.go:108] docker network force-systemd-env-593000 192.168.103.0/24 created
	I0425 13:06:05.979055   22886 kic.go:121] calculated static IP "192.168.103.2" for the "force-systemd-env-593000" container
	I0425 13:06:05.979159   22886 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0425 13:06:06.028786   22886 cli_runner.go:164] Run: docker volume create force-systemd-env-593000 --label name.minikube.sigs.k8s.io=force-systemd-env-593000 --label created_by.minikube.sigs.k8s.io=true
	I0425 13:06:06.077384   22886 oci.go:103] Successfully created a docker volume force-systemd-env-593000
	I0425 13:06:06.077503   22886 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-593000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-593000 --entrypoint /usr/bin/test -v force-systemd-env-593000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0425 13:06:06.328563   22886 oci.go:107] Successfully prepared a docker volume force-systemd-env-593000
	I0425 13:06:06.328593   22886 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 13:06:06.328607   22886 kic.go:194] Starting extracting preloaded images to volume ...
	I0425 13:06:06.328714   22886 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-593000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0425 13:12:05.736040   22886 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 13:12:05.736166   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:12:05.790452   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	I0425 13:12:05.790584   22886 retry.go:31] will retry after 353.221911ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:12:06.144848   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:12:06.197290   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	I0425 13:12:06.197388   22886 retry.go:31] will retry after 281.00851ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:12:06.480833   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:12:06.533175   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	I0425 13:12:06.533274   22886 retry.go:31] will retry after 410.642574ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:12:06.946298   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:12:06.998837   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	W0425 13:12:06.998940   22886 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	
	W0425 13:12:06.998960   22886 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:12:06.999027   22886 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0425 13:12:06.999092   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:12:07.049903   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	I0425 13:12:07.049998   22886 retry.go:31] will retry after 268.975819ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:12:07.321390   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:12:07.371402   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	I0425 13:12:07.371519   22886 retry.go:31] will retry after 397.451793ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:12:07.770630   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:12:07.821108   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	I0425 13:12:07.821210   22886 retry.go:31] will retry after 605.85853ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:12:08.428132   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:12:08.480701   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	W0425 13:12:08.480821   22886 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	
	W0425 13:12:08.480838   22886 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:12:08.480846   22886 start.go:128] duration metric: took 6m2.767960521s to createHost
	I0425 13:12:08.480920   22886 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 13:12:08.480981   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:12:08.531029   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	I0425 13:12:08.531122   22886 retry.go:31] will retry after 283.261327ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:12:08.815122   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:12:08.866816   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	I0425 13:12:08.866916   22886 retry.go:31] will retry after 318.629043ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:12:09.187898   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:12:09.238854   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	I0425 13:12:09.238948   22886 retry.go:31] will retry after 329.881546ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:12:09.569981   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:12:09.623405   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	W0425 13:12:09.623518   22886 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	
	W0425 13:12:09.623534   22886 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:12:09.623604   22886 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0425 13:12:09.623659   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:12:09.671928   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	I0425 13:12:09.672023   22886 retry.go:31] will retry after 146.488787ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:12:09.818968   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:12:09.868671   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	I0425 13:12:09.868765   22886 retry.go:31] will retry after 218.656427ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:12:10.088790   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:12:10.141789   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	I0425 13:12:10.141878   22886 retry.go:31] will retry after 396.415746ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:12:10.539660   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:12:10.591927   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	I0425 13:12:10.592039   22886 retry.go:31] will retry after 455.700075ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:12:11.050145   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000
	W0425 13:12:11.102248   22886 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000 returned with exit code 1
	W0425 13:12:11.102349   22886 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	
	W0425 13:12:11.102367   22886 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-593000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-593000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	I0425 13:12:11.102379   22886 fix.go:56] duration metric: took 6m24.071080842s for fixHost
	I0425 13:12:11.102387   22886 start.go:83] releasing machines lock for "force-systemd-env-593000", held for 6m24.071124293s
	W0425 13:12:11.102469   22886 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-593000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-593000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0425 13:12:11.144770   22886 out.go:177] 
	W0425 13:12:11.165890   22886 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0425 13:12:11.165951   22886 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0425 13:12:11.165979   22886 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0425 13:12:11.186659   22886 out.go:177] 

** /stderr **
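For context, every "will retry after ..." line above re-runs the same Docker template: it asks the daemon which host port is published for the container's 22/tcp, and it fails with "No such container" because the container was never created. A minimal Go sketch of that lookup, assuming only the docker CLI on PATH (sshHostPort is a name chosen here, not minikube's):

	// Sketch of the port lookup the log keeps retrying.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			// A missing container surfaces as exit status 1 with
			// "No such container" on stderr, exactly as in the log.
			return "", fmt.Errorf("get port 22 for %q: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("force-systemd-env-593000")
		fmt.Println(port, err)
	}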
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-593000 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 52
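The network_create lines earlier in the log also show how the replacement network got its range: candidate /24s step by 9 in the third octet (192.168.49, .58, .67, .76, .85, .94) until 192.168.103.0/24 comes up free. A sketch of that scan as read off this log; the hard-coded reserved set stands in for the live probing minikube actually does:

	package main

	import "fmt"

	// firstFreeSubnet steps the third octet by 9, matching the skipped
	// candidates above, and returns the first subnet not in taken.
	func firstFreeSubnet(taken map[string]bool) string {
		for octet := 49; octet <= 255; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[cidr] {
				return cidr
			}
		}
		return ""
	}

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true,
			"192.168.67.0/24": true, "192.168.76.0/24": true,
			"192.168.85.0/24": true, "192.168.94.0/24": true,
		}
		fmt.Println(firstFreeSubnet(taken)) // prints 192.168.103.0/24
	}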
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-593000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-593000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (198.430388ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-env-593000 host status: state: unknown state "force-systemd-env-593000": docker container inspect force-systemd-env-593000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000
	

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-593000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
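For reference, the probe behind docker_test.go:110 reads Docker's cgroup driver over minikube ssh; with systemd forced through the environment, the test presumably wants "systemd" back. A local sketch of the same read (run against the local daemon rather than the node; the expected value is our reading of the test's intent):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			fmt.Println("docker info failed:", err)
			return
		}
		fmt.Printf("cgroup driver = %q (want %q)\n", strings.TrimSpace(string(out)), "systemd")
	}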
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-04-25 13:12:11.460133 -0700 PDT m=+6080.432504759
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-593000
helpers_test.go:235: (dbg) docker inspect force-systemd-env-593000:

-- stdout --
	[
	    {
	        "Name": "force-systemd-env-593000",
	        "Id": "80c77eb027fc4a7ac4024aaf73ed5464d68292539edc4354e757d9b22cabc13c",
	        "Created": "2024-04-25T20:06:05.940175896Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-env-593000"
	        }
	    }
	]

-- /stdout --
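The inspect output above is the one piece of state that survived the failure: a bridge network holding 192.168.103.0/24 with no attached containers, consistent with the "minikube delete" cleanup the log suggests. A sketch that reads the same fields programmatically (the struct mirrors the JSON shown above; nothing beyond the docker CLI is assumed):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type network struct {
		Name string
		IPAM struct {
			Config []struct{ Subnet, Gateway string }
		}
		Containers map[string]json.RawMessage
	}

	func main() {
		out, err := exec.Command("docker", "network", "inspect", "force-systemd-env-593000").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		var nets []network
		if err := json.Unmarshal(out, &nets); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, n := range nets {
			fmt.Printf("%s %+v, %d attached container(s)\n", n.Name, n.IPAM.Config, len(n.Containers))
		}
	}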
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-593000 -n force-systemd-env-593000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-593000 -n force-systemd-env-593000: exit status 7 (113.147212ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0425 13:12:11.623214   23654 status.go:249] status error: host: state: unknown state "force-systemd-env-593000": docker container inspect force-systemd-env-593000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-593000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-593000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-593000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-593000
--- FAIL: TestForceSystemdEnv (754.57s)

TestMountStart/serial/VerifyMountPostStop (873.11s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-636000 ssh -- ls /minikube-host
E0425 11:59:02.892174    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 12:00:08.969728    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 12:01:32.027922    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 12:04:02.919244    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 12:05:08.976691    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 12:09:02.923290    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 12:10:08.979431    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-2-636000 ssh -- ls /minikube-host: signal: killed (14m32.685001744s)
mount_start_test.go:116: mount failed: "out/minikube-darwin-amd64 -p mount-start-2-636000 ssh -- ls /minikube-host" : signal: killed
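The ls probe above hung until the harness delivered SIGKILL 14m32s later. When reproducing locally, the same command from the log can be bounded with a context deadline instead (sketch; the 30-second budget is an arbitrary choice):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64",
			"-p", "mount-start-2-636000", "ssh", "--", "ls", "/minikube-host")
		out, err := cmd.CombinedOutput()
		fmt.Printf("out=%q err=%v (ctx err=%v)\n", out, err, ctx.Err())
	}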
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountPostStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-636000
helpers_test.go:235: (dbg) docker inspect mount-start-2-636000:

-- stdout --
	[
	    {
	        "Id": "48725c04769130b0f09c6dd9f5646519c1e3c11d531869907d9c42c33f2b86c6",
	        "Created": "2024-04-25T18:56:05.803584119Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 122297,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-25T18:56:16.866685891Z",
	            "FinishedAt": "2024-04-25T18:56:14.581401027Z"
	        },
	        "Image": "sha256:7c2e7b1115438f0e876ee0c793febc72a876a26c7b12b8e5475b223c894686c4",
	        "ResolvConfPath": "/var/lib/docker/containers/48725c04769130b0f09c6dd9f5646519c1e3c11d531869907d9c42c33f2b86c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48725c04769130b0f09c6dd9f5646519c1e3c11d531869907d9c42c33f2b86c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/48725c04769130b0f09c6dd9f5646519c1e3c11d531869907d9c42c33f2b86c6/hosts",
	        "LogPath": "/var/lib/docker/containers/48725c04769130b0f09c6dd9f5646519c1e3c11d531869907d9c42c33f2b86c6/48725c04769130b0f09c6dd9f5646519c1e3c11d531869907d9c42c33f2b86c6-json.log",
	        "Name": "/mount-start-2-636000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/Users:/minikube-host",
	                "/lib/modules:/lib/modules:ro",
	                "mount-start-2-636000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "mount-start-2-636000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dd509ab04b832632030595aaeaff22438a1bb072e032da88dd8d9b3779adb4e4-init/diff:/var/lib/docker/overlay2/b36b46247a77cf9e6a819b1e012f26aa551db55e7bf1042576a1dd003188df4b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dd509ab04b832632030595aaeaff22438a1bb072e032da88dd8d9b3779adb4e4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dd509ab04b832632030595aaeaff22438a1bb072e032da88dd8d9b3779adb4e4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dd509ab04b832632030595aaeaff22438a1bb072e032da88dd8d9b3779adb4e4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "mount-start-2-636000",
	                "Source": "/var/lib/docker/volumes/mount-start-2-636000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/Users",
	                "Destination": "/minikube-host",
	                "Mode": "",
	                "RW": true,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "mount-start-2-636000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "mount-start-2-636000",
	                "name.minikube.sigs.k8s.io": "mount-start-2-636000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "74f4591780c9b8599ca43d9babc97de8c792d043f0172196b2155d104bfa6f12",
	            "SandboxKey": "/var/run/docker/netns/74f4591780c9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59961"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59962"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59963"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59964"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59965"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "mount-start-2-636000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "abbf97e90f1e4194211b15563f85e22ea389af4a40395ca8d154cf72d2a21294",
	                    "EndpointID": "d0450f711e91c479057ed3fa07300307b46a7f832db879c0c2b316d29545d7c6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "mount-start-2-636000",
	                        "48725c047691"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
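The inspect output above shows how minikube's kic container publishes its service ports: each of 22, 2376, 5000, 8443 and 32443/tcp is bound to 127.0.0.1 with a requested HostPort of "0", so Docker assigns free ephemeral host ports (59961-59965 for this container). A minimal sketch of reading one of those assignments back, using the same Go template that appears in the minikube logs later in this report (container name taken from the inspect output above):

	# Print the host port Docker assigned to the container's SSH port (22/tcp);
	# for the container inspected above this should print 59961.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' mount-start-2-636000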
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-636000 -n mount-start-2-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-636000 -n mount-start-2-636000: exit status 6 (373.418852ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 12:10:57.081311   20109 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-2-636000" does not appear in /Users/jenkins/minikube-integration/18757-9222/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-636000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountPostStop (873.11s)
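The status probe exits with status 6 because the profile's API endpoint was never written to the kubeconfig, so kubectl is left pointing at a stale context even though the container state is Running. A hedged sketch of the triage the warning above suggests (assuming the profile still exists; since "mount-start-2-636000" is missing from the kubeconfig entirely, update-context would likely report the same missing-endpoint condition rather than repair it):

	# Inspect what kubectl currently points at:
	kubectl config current-context
	# The remediation the warning proposes, scoped to this profile:
	out/minikube-darwin-amd64 update-context -p mount-start-2-636000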

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (754.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-948000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0425 12:14:02.925720    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 12:15:08.986057    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 12:18:12.041372    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 12:19:02.931851    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 12:20:08.985856    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 12:24:02.932492    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-948000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m33.984767421s)

                                                
                                                
-- stdout --
	* [multinode-948000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18757
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "multinode-948000" primary control-plane node in "multinode-948000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-948000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
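This failure shares its signature with TestOffline and TestMountStart above: createHost exhausts its 360-second budget, apparently while the preload tarball is still being extracted into the profile's Docker volume (in the stderr transcript below, nothing is logged between 12:12:07 and 12:18:06), so the node container is never created and the recreate path then finds nothing to demolish. A sketch of commands that could be run on the agent to watch for that hang on a similar run (hypothetical triage, not part of the test output; the label value is the one minikube attaches to this profile's containers in the log below):

	# Is the preload-extraction sidecar for this profile still running?
	docker ps --filter "label=name.minikube.sigs.k8s.io=multinode-948000"
	# What has the daemon done with this profile's containers recently?
	docker events --since 10m --filter "label=name.minikube.sigs.k8s.io=multinode-948000"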
** stderr ** 
	I0425 12:12:06.114407   20272 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:12:06.114687   20272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:12:06.114692   20272 out.go:304] Setting ErrFile to fd 2...
	I0425 12:12:06.114696   20272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:12:06.114868   20272 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 12:12:06.116518   20272 out.go:298] Setting JSON to false
	I0425 12:12:06.139267   20272 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":9697,"bootTime":1714062629,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0425 12:12:06.139393   20272 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0425 12:12:06.161350   20272 out.go:177] * [multinode-948000] minikube v1.33.0 on Darwin 14.4.1
	I0425 12:12:06.182308   20272 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 12:12:06.182313   20272 notify.go:220] Checking for updates...
	I0425 12:12:06.204142   20272 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
	I0425 12:12:06.227068   20272 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0425 12:12:06.248279   20272 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 12:12:06.269046   20272 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube
	I0425 12:12:06.290156   20272 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 12:12:06.311536   20272 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 12:12:06.365977   20272 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0425 12:12:06.366150   20272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0425 12:12:06.475714   20272 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:86 OomKillDisable:false NGoroutines:105 SystemTime:2024-04-25 19:12:06.465307997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211088384 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0425 12:12:06.517816   20272 out.go:177] * Using the docker driver based on user configuration
	I0425 12:12:06.538816   20272 start.go:297] selected driver: docker
	I0425 12:12:06.538853   20272 start.go:901] validating driver "docker" against <nil>
	I0425 12:12:06.538868   20272 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 12:12:06.543739   20272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0425 12:12:06.651809   20272 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:86 OomKillDisable:false NGoroutines:105 SystemTime:2024-04-25 19:12:06.641638286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211088384 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0425 12:12:06.652015   20272 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0425 12:12:06.652199   20272 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 12:12:06.673587   20272 out.go:177] * Using Docker Desktop driver with root privileges
	I0425 12:12:06.694758   20272 cni.go:84] Creating CNI manager for ""
	I0425 12:12:06.694791   20272 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0425 12:12:06.694803   20272 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0425 12:12:06.694937   20272 start.go:340] cluster config:
	{Name:multinode-948000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-948000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 12:12:06.716664   20272 out.go:177] * Starting "multinode-948000" primary control-plane node in "multinode-948000" cluster
	I0425 12:12:06.758705   20272 cache.go:121] Beginning downloading kic base image for docker with docker
	I0425 12:12:06.779844   20272 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0425 12:12:06.821766   20272 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 12:12:06.821837   20272 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0425 12:12:06.821853   20272 cache.go:56] Caching tarball of preloaded images
	I0425 12:12:06.821865   20272 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0425 12:12:06.822106   20272 preload.go:173] Found /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0425 12:12:06.822128   20272 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0425 12:12:06.823711   20272 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/multinode-948000/config.json ...
	I0425 12:12:06.823849   20272 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/multinode-948000/config.json: {Name:mke5c367eeea78e005be165d4fce150b0041c31b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 12:12:06.872994   20272 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0425 12:12:06.873042   20272 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0425 12:12:06.873062   20272 cache.go:194] Successfully downloaded all kic artifacts
	I0425 12:12:06.873110   20272 start.go:360] acquireMachinesLock for multinode-948000: {Name:mkc22316bab7a305bfcfe18e5a80258ef7beb819 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 12:12:06.873618   20272 start.go:364] duration metric: took 494.837µs to acquireMachinesLock for "multinode-948000"
	I0425 12:12:06.873651   20272 start.go:93] Provisioning new machine with config: &{Name:multinode-948000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-948000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0425 12:12:06.873727   20272 start.go:125] createHost starting for "" (driver="docker")
	I0425 12:12:06.915709   20272 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0425 12:12:06.916097   20272 start.go:159] libmachine.API.Create for "multinode-948000" (driver="docker")
	I0425 12:12:06.916159   20272 client.go:168] LocalClient.Create starting
	I0425 12:12:06.916389   20272 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/ca.pem
	I0425 12:12:06.916493   20272 main.go:141] libmachine: Decoding PEM data...
	I0425 12:12:06.916528   20272 main.go:141] libmachine: Parsing certificate...
	I0425 12:12:06.916634   20272 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/cert.pem
	I0425 12:12:06.916708   20272 main.go:141] libmachine: Decoding PEM data...
	I0425 12:12:06.916724   20272 main.go:141] libmachine: Parsing certificate...
	I0425 12:12:06.917610   20272 cli_runner.go:164] Run: docker network inspect multinode-948000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0425 12:12:06.966557   20272 cli_runner.go:211] docker network inspect multinode-948000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0425 12:12:06.966660   20272 network_create.go:281] running [docker network inspect multinode-948000] to gather additional debugging logs...
	I0425 12:12:06.966675   20272 cli_runner.go:164] Run: docker network inspect multinode-948000
	W0425 12:12:07.014510   20272 cli_runner.go:211] docker network inspect multinode-948000 returned with exit code 1
	I0425 12:12:07.014534   20272 network_create.go:284] error running [docker network inspect multinode-948000]: docker network inspect multinode-948000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-948000 not found
	I0425 12:12:07.014557   20272 network_create.go:286] output of [docker network inspect multinode-948000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-948000 not found
	
	** /stderr **
	I0425 12:12:07.014682   20272 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0425 12:12:07.063804   20272 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 12:12:07.065434   20272 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 12:12:07.065770   20272 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022f0010}
	I0425 12:12:07.065787   20272 network_create.go:124] attempt to create docker network multinode-948000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0425 12:12:07.065863   20272 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-948000 multinode-948000
	I0425 12:12:07.151301   20272 network_create.go:108] docker network multinode-948000 192.168.67.0/24 created
	I0425 12:12:07.151343   20272 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-948000" container
	I0425 12:12:07.151438   20272 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0425 12:12:07.199909   20272 cli_runner.go:164] Run: docker volume create multinode-948000 --label name.minikube.sigs.k8s.io=multinode-948000 --label created_by.minikube.sigs.k8s.io=true
	I0425 12:12:07.249065   20272 oci.go:103] Successfully created a docker volume multinode-948000
	I0425 12:12:07.249174   20272 cli_runner.go:164] Run: docker run --rm --name multinode-948000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-948000 --entrypoint /usr/bin/test -v multinode-948000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0425 12:12:07.583232   20272 oci.go:107] Successfully prepared a docker volume multinode-948000
	I0425 12:12:07.583278   20272 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 12:12:07.583291   20272 kic.go:194] Starting extracting preloaded images to volume ...
	I0425 12:12:07.583396   20272 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-948000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0425 12:18:06.924401   20272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:18:06.924542   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:18:06.975490   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:18:06.975615   20272 retry.go:31] will retry after 167.339791ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:07.143616   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:18:07.192578   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:18:07.192697   20272 retry.go:31] will retry after 323.15842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:07.516862   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:18:07.567817   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:18:07.567913   20272 retry.go:31] will retry after 483.864748ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:08.054071   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:18:08.106062   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:18:08.106171   20272 retry.go:31] will retry after 636.484342ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:08.745078   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:18:08.796493   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	W0425 12:18:08.796600   20272 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	
	W0425 12:18:08.796618   20272 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:08.796694   20272 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0425 12:18:08.796744   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:18:08.847197   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:18:08.847287   20272 retry.go:31] will retry after 130.02365ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:08.979672   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:18:09.033386   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:18:09.033474   20272 retry.go:31] will retry after 215.272983ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:09.249997   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:18:09.299901   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:18:09.299999   20272 retry.go:31] will retry after 621.430986ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:09.922301   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:18:09.972583   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:18:09.972676   20272 retry.go:31] will retry after 523.375458ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:10.497945   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:18:10.549384   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	W0425 12:18:10.549485   20272 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	
	W0425 12:18:10.549502   20272 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:10.549518   20272 start.go:128] duration metric: took 6m3.66911262s to createHost
	I0425 12:18:10.549525   20272 start.go:83] releasing machines lock for "multinode-948000", held for 6m3.669233747s
	W0425 12:18:10.549542   20272 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0425 12:18:10.549967   20272 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:18:10.597757   20272 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:18:10.597813   20272 delete.go:82] Unable to get host status for multinode-948000, assuming it has already been deleted: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	W0425 12:18:10.597895   20272 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0425 12:18:10.597906   20272 start.go:728] Will try again in 5 seconds ...
	I0425 12:18:15.600135   20272 start.go:360] acquireMachinesLock for multinode-948000: {Name:mkc22316bab7a305bfcfe18e5a80258ef7beb819 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 12:18:15.600978   20272 start.go:364] duration metric: took 776.017µs to acquireMachinesLock for "multinode-948000"
	I0425 12:18:15.601115   20272 start.go:96] Skipping create...Using existing machine configuration
	I0425 12:18:15.601140   20272 fix.go:54] fixHost starting: 
	I0425 12:18:15.601660   20272 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:18:15.653696   20272 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:18:15.653739   20272 fix.go:112] recreateIfNeeded on multinode-948000: state= err=unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:15.653757   20272 fix.go:117] machineExists: false. err=machine does not exist
	I0425 12:18:15.696103   20272 out.go:177] * docker "multinode-948000" container is missing, will recreate.
	I0425 12:18:15.716935   20272 delete.go:124] DEMOLISHING multinode-948000 ...
	I0425 12:18:15.717144   20272 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:18:15.765597   20272 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	W0425 12:18:15.765667   20272 stop.go:83] unable to get state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:15.765690   20272 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:15.766083   20272 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:18:15.814123   20272 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:18:15.814172   20272 delete.go:82] Unable to get host status for multinode-948000, assuming it has already been deleted: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:15.814257   20272 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-948000
	W0425 12:18:15.862703   20272 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-948000 returned with exit code 1
	I0425 12:18:15.862746   20272 kic.go:371] could not find the container multinode-948000 to remove it. will try anyways
	I0425 12:18:15.862818   20272 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:18:15.909994   20272 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	W0425 12:18:15.910037   20272 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:15.910116   20272 cli_runner.go:164] Run: docker exec --privileged -t multinode-948000 /bin/bash -c "sudo init 0"
	W0425 12:18:15.957989   20272 cli_runner.go:211] docker exec --privileged -t multinode-948000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0425 12:18:15.958018   20272 oci.go:650] error shutdown multinode-948000: docker exec --privileged -t multinode-948000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:16.960438   20272 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:18:17.012555   20272 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:18:17.012604   20272 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:17.012615   20272 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:18:17.012638   20272 retry.go:31] will retry after 280.358383ms: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:17.295328   20272 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:18:17.344440   20272 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:18:17.344486   20272 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:17.344496   20272 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:18:17.344521   20272 retry.go:31] will retry after 546.767858ms: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:17.893641   20272 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:18:17.947521   20272 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:18:17.947567   20272 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:17.947580   20272 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:18:17.947607   20272 retry.go:31] will retry after 1.649959648s: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:19.598469   20272 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:18:19.652937   20272 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:18:19.652989   20272 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:19.652997   20272 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:18:19.653017   20272 retry.go:31] will retry after 1.457987646s: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:21.112696   20272 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:18:21.164905   20272 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:18:21.164956   20272 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:21.164967   20272 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:18:21.164991   20272 retry.go:31] will retry after 2.674449049s: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:23.840228   20272 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:18:23.889469   20272 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:18:23.889519   20272 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:23.889530   20272 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:18:23.889552   20272 retry.go:31] will retry after 3.630149312s: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:27.521475   20272 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:18:27.571446   20272 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:18:27.571491   20272 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:27.571500   20272 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:18:27.571526   20272 retry.go:31] will retry after 5.289494884s: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:32.862462   20272 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:18:32.916027   20272 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:18:32.916071   20272 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:18:32.916081   20272 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:18:32.916113   20272 oci.go:88] couldn't shut down multinode-948000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	 
	I0425 12:18:32.916187   20272 cli_runner.go:164] Run: docker rm -f -v multinode-948000
	I0425 12:18:32.966155   20272 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-948000
	W0425 12:18:33.014263   20272 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-948000 returned with exit code 1
	I0425 12:18:33.014376   20272 cli_runner.go:164] Run: docker network inspect multinode-948000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0425 12:18:33.062862   20272 cli_runner.go:164] Run: docker network rm multinode-948000
	I0425 12:18:33.165759   20272 fix.go:124] Sleeping 1 second for extra luck!
	I0425 12:18:34.167919   20272 start.go:125] createHost starting for "" (driver="docker")
	I0425 12:18:34.213542   20272 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0425 12:18:34.213763   20272 start.go:159] libmachine.API.Create for "multinode-948000" (driver="docker")
	I0425 12:18:34.213792   20272 client.go:168] LocalClient.Create starting
	I0425 12:18:34.214021   20272 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/ca.pem
	I0425 12:18:34.214121   20272 main.go:141] libmachine: Decoding PEM data...
	I0425 12:18:34.214146   20272 main.go:141] libmachine: Parsing certificate...
	I0425 12:18:34.214225   20272 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/cert.pem
	I0425 12:18:34.214299   20272 main.go:141] libmachine: Decoding PEM data...
	I0425 12:18:34.214314   20272 main.go:141] libmachine: Parsing certificate...
	I0425 12:18:34.215213   20272 cli_runner.go:164] Run: docker network inspect multinode-948000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0425 12:18:34.265634   20272 cli_runner.go:211] docker network inspect multinode-948000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0425 12:18:34.265729   20272 network_create.go:281] running [docker network inspect multinode-948000] to gather additional debugging logs...
	I0425 12:18:34.265747   20272 cli_runner.go:164] Run: docker network inspect multinode-948000
	W0425 12:18:34.313699   20272 cli_runner.go:211] docker network inspect multinode-948000 returned with exit code 1
	I0425 12:18:34.313722   20272 network_create.go:284] error running [docker network inspect multinode-948000]: docker network inspect multinode-948000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-948000 not found
	I0425 12:18:34.313744   20272 network_create.go:286] output of [docker network inspect multinode-948000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-948000 not found
	
	** /stderr **
	I0425 12:18:34.313874   20272 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0425 12:18:34.363580   20272 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 12:18:34.365008   20272 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 12:18:34.366447   20272 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 12:18:34.367033   20272 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023794c0}
	I0425 12:18:34.367052   20272 network_create.go:124] attempt to create docker network multinode-948000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0425 12:18:34.367188   20272 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-948000 multinode-948000
	W0425 12:18:34.415476   20272 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-948000 multinode-948000 returned with exit code 1
	W0425 12:18:34.415524   20272 network_create.go:149] failed to create docker network multinode-948000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-948000 multinode-948000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0425 12:18:34.415542   20272 network_create.go:116] failed to create docker network multinode-948000 192.168.76.0/24, will retry: subnet is taken
	I0425 12:18:34.417147   20272 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 12:18:34.418579   20272 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002215c90}
	I0425 12:18:34.418600   20272 network_create.go:124] attempt to create docker network multinode-948000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0425 12:18:34.418677   20272 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-948000 multinode-948000
	I0425 12:18:34.501972   20272 network_create.go:108] docker network multinode-948000 192.168.85.0/24 created
	I0425 12:18:34.502004   20272 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-948000" container
	I0425 12:18:34.502110   20272 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0425 12:18:34.550560   20272 cli_runner.go:164] Run: docker volume create multinode-948000 --label name.minikube.sigs.k8s.io=multinode-948000 --label created_by.minikube.sigs.k8s.io=true
	I0425 12:18:34.598020   20272 oci.go:103] Successfully created a docker volume multinode-948000
	I0425 12:18:34.598141   20272 cli_runner.go:164] Run: docker run --rm --name multinode-948000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-948000 --entrypoint /usr/bin/test -v multinode-948000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0425 12:18:34.845615   20272 oci.go:107] Successfully prepared a docker volume multinode-948000
	I0425 12:18:34.845658   20272 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 12:18:34.845671   20272 kic.go:194] Starting extracting preloaded images to volume ...
	I0425 12:18:34.845774   20272 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-948000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0425 12:24:34.217373   20272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:24:34.217539   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:24:34.268534   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:24:34.268641   20272 retry.go:31] will retry after 275.880185ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:24:34.546885   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:24:34.597357   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:24:34.597458   20272 retry.go:31] will retry after 508.546162ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:24:35.106718   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:24:35.159192   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:24:35.159286   20272 retry.go:31] will retry after 481.671524ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:24:35.643305   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:24:35.695839   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	W0425 12:24:35.695969   20272 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	
	W0425 12:24:35.695989   20272 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:24:35.696049   20272 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0425 12:24:35.696114   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:24:35.745072   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:24:35.745167   20272 retry.go:31] will retry after 189.693573ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:24:35.937197   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:24:35.988045   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:24:35.988146   20272 retry.go:31] will retry after 407.343ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:24:36.396567   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:24:36.449616   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:24:36.449713   20272 retry.go:31] will retry after 372.152845ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:24:36.823176   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:24:36.875395   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	W0425 12:24:36.875504   20272 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	
	W0425 12:24:36.875520   20272 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:24:36.875528   20272 start.go:128] duration metric: took 6m2.706331683s to createHost
	I0425 12:24:36.875593   20272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:24:36.875651   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:24:36.923388   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:24:36.923500   20272 retry.go:31] will retry after 261.942685ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:24:37.187700   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:24:37.238211   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:24:37.238301   20272 retry.go:31] will retry after 345.69565ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:24:37.585518   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:24:37.638690   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:24:37.638787   20272 retry.go:31] will retry after 706.464169ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:24:38.345815   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:24:38.398349   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	W0425 12:24:38.398447   20272 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	
	W0425 12:24:38.398466   20272 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:24:38.398523   20272 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0425 12:24:38.398578   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:24:38.447778   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:24:38.447872   20272 retry.go:31] will retry after 153.53272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:24:38.603806   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:24:38.656699   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:24:38.656794   20272 retry.go:31] will retry after 552.275087ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:24:39.209681   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:24:39.259001   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:24:39.259101   20272 retry.go:31] will retry after 586.102608ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:24:39.845768   20272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:24:39.895404   20272 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	W0425 12:24:39.895505   20272 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	
	W0425 12:24:39.895523   20272 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:24:39.895534   20272 fix.go:56] duration metric: took 6m24.293092624s for fixHost
	I0425 12:24:39.895540   20272 start.go:83] releasing machines lock for "multinode-948000", held for 6m24.293169578s
	W0425 12:24:39.895614   20272 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-948000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-948000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0425 12:24:39.938913   20272 out.go:177] 
	W0425 12:24:39.960043   20272 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0425 12:24:39.960091   20272 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0425 12:24:39.960119   20272 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0425 12:24:39.981655   20272 out.go:177] 

                                                
                                                
** /stderr **
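The network_create lines in the trace above show how minikube hunts for a free private /24: it walks a fixed candidate sequence (192.168.49.0, .58, .67, .76, .85 — the third octet advancing by 9), skips subnets it already knows are reserved, and when Docker rejects a candidate with "Pool overlaps with other one on this address space" it marks that subnet taken and resumes the walk. A minimal Go sketch of that candidate walk follows; the step of 9 and the reserved set are read off the log, not taken from minikube source.

    package main

    import "fmt"

    // reserved mirrors the subnets the trace reports as already taken.
    var reserved = map[string]bool{
        "192.168.49.0/24": true,
        "192.168.58.0/24": true,
        "192.168.67.0/24": true,
        "192.168.76.0/24": true,
    }

    // freeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... and returns
    // the first candidate not present in the reserved set.
    func freeSubnet() (string, bool) {
        for octet := 49; octet <= 247; octet += 9 { // step of 9 inferred from the trace
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if !reserved[cidr] {
                return cidr, true
            }
        }
        return "", false
    }

    func main() {
        if cidr, ok := freeSubnet(); ok {
            fmt.Println("using free private subnet", cidr) // prints 192.168.85.0/24
        }
    }

In the run above the first free candidate, 192.168.76.0/24, still collided with an existing Docker address pool, so the in-process reservation table is only advisory; Docker's error is the final authority, after which 192.168.85.0/24 succeeded.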
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-948000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-948000
helpers_test.go:235: (dbg) docker inspect multinode-948000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-948000",
	        "Id": "2df7cdd9ba16c611162789505bb6ec50480a79f722b8e718632ad6626b1dfce8",
	        "Created": "2024-04-25T19:18:34.463396741Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-948000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000: exit status 7 (112.992907ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 12:24:40.282613   21003 status.go:249] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-948000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (754.22s)
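The retry.go lines in the stderr block record waits of roughly 0.55s, 1.65s, 1.46s, 2.67s, 3.63s and 5.29s between docker container inspect attempts — a doubling backoff with jitter, which is why the waits are not strictly increasing. A minimal Go sketch of that shape, shelling out to the same inspect command; the base delay and doubling factor are illustrative, not minikube's exact constants.

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "strings"
        "time"
    )

    // containerStatus runs `docker container inspect --format {{.State.Status}}`
    // and retries with exponentially growing, jittered delays, as the trace does.
    func containerStatus(name string, attempts int) (string, error) {
        delay := 500 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            var out []byte
            out, err = exec.Command("docker", "container", "inspect",
                name, "--format", "{{.State.Status}}").Output()
            if err == nil {
                return strings.TrimSpace(string(out)), nil
            }
            // sleep between delay and 2*delay, then double the base
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            delay *= 2
        }
        return "", err
    }

    func main() {
        status, err := containerStatus("multinode-948000", 6)
        fmt.Println(status, err)
    }

Note that in this run the loop could never succeed: the daemon answers "No such container" because the container is already gone, which is why minikube eventually logs "couldn't shut down multinode-948000 (might be okay)" and falls through to docker rm -f.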

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (110.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-948000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-948000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (118.282976ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-948000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-948000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-948000 -- rollout status deployment/busybox: exit status 1 (108.271081ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-948000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.370021ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-948000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.335259ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-948000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (114.591487ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-948000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (111.379435ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-948000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (111.798514ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-948000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (112.149963ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-948000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (113.773191ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-948000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0425 12:25:08.988177    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.763027ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-948000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (111.108947ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-948000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.570977ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-948000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (114.004864ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-948000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (134.076216ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-948000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-948000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-948000 -- exec  -- nslookup kubernetes.io: exit status 1 (108.960922ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-948000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-948000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-948000 -- exec  -- nslookup kubernetes.default: exit status 1 (108.994803ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-948000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-948000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-948000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (110.864952ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-948000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-948000
helpers_test.go:235: (dbg) docker inspect multinode-948000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-948000",
	        "Id": "2df7cdd9ba16c611162789505bb6ec50480a79f722b8e718632ad6626b1dfce8",
	        "Created": "2024-04-25T19:18:34.463396741Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-948000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000: exit status 7 (113.289957ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 12:26:30.682644   21093 status.go:249] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-948000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (110.40s)
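DeployApp2Nodes polls kubectl get pods -o jsonpath='{.items[*].status.podIP}' about a dozen times, logging "may be temporary" after each miss, before declaring failure. A minimal sketch of such a poll-until-deadline loop; the interval and timeout are illustrative, not the test's actual values.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // podIPs polls kubectl until the jsonpath query returns something non-empty
    // or the deadline passes, mirroring the repeated runs in the log above.
    func podIPs(context string, timeout time.Duration) ([]string, error) {
        deadline := time.Now().Add(timeout)
        for {
            out, err := exec.Command("kubectl", "--context", context,
                "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
            if err == nil && strings.TrimSpace(string(out)) != "" {
                return strings.Fields(string(out)), nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("failed to retrieve Pod IPs (may be temporary): %v", err)
            }
            time.Sleep(5 * time.Second)
        }
    }

    func main() {
        ips, err := podIPs("multinode-948000", time.Minute)
        fmt.Println(ips, err)
    }

Here every attempt was doomed from the start — "no server found for cluster" means there is no apiserver to poll — so the retries only delay the inevitable failure.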

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-948000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (108.213029ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-948000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-948000
helpers_test.go:235: (dbg) docker inspect multinode-948000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-948000",
	        "Id": "2df7cdd9ba16c611162789505bb6ec50480a79f722b8e718632ad6626b1dfce8",
	        "Created": "2024-04-25T19:18:34.463396741Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-948000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000: exit status 7 (113.662229ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 12:26:30.956787   21102 status.go:249] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-948000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.27s)
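The post-mortem helper runs out/minikube-darwin-amd64 status --format={{.Host}} and notes "exit status 7 (may be ok)": the state string still arrives on stdout (here "Nonexistent") even though the exit code is non-zero. A sketch of reading both, relying only on the documented behavior that exec's Output returns the captured stdout alongside an *exec.ExitError.

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    // hostState runs `minikube status --format={{.Host}} -p <profile>` and
    // returns the host state even when the command exits non-zero, as the
    // helper above does (exit 7 with stdout "Nonexistent" is informative).
    func hostState(profile string) (string, int, error) {
        out, err := exec.Command("minikube", "status",
            "--format", "{{.Host}}", "-p", profile).Output()
        state := strings.TrimSpace(string(out))
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            return state, exitErr.ExitCode(), nil // non-zero exit, but state is usable
        }
        return state, 0, err
    }

    func main() {
        state, code, err := hostState("multinode-948000")
        fmt.Printf("state=%q exit=%d err=%v\n", state, code, err)
    }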

                                                
                                    
TestMultiNode/serial/AddNode (0.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-948000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-948000 -v 3 --alsologtostderr: exit status 80 (199.690937ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 12:26:31.021424   21106 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:26:31.022367   21106 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:26:31.022376   21106 out.go:304] Setting ErrFile to fd 2...
	I0425 12:26:31.022386   21106 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:26:31.022582   21106 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 12:26:31.022899   21106 mustload.go:65] Loading cluster: multinode-948000
	I0425 12:26:31.023189   21106 config.go:182] Loaded profile config "multinode-948000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:26:31.023577   21106 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:26:31.071086   21106 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:26:31.093017   21106 out.go:177] 
	W0425 12:26:31.114081   21106 out.go:239] X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-948000 host status: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-948000 host status: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	
	I0425 12:26:31.134734   21106 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-948000 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-948000
helpers_test.go:235: (dbg) docker inspect multinode-948000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-948000",
	        "Id": "2df7cdd9ba16c611162789505bb6ec50480a79f722b8e718632ad6626b1dfce8",
	        "Created": "2024-04-25T19:18:34.463396741Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-948000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000: exit status 7 (114.438826ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 12:26:31.323672   21112 status.go:249] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-948000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.37s)
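node add dies in mustload: before doing anything it inspects the profile's container, and that inspect fails, producing GUEST_STATUS (exit status 80). minikube shells out to the docker CLI for this check; purely for comparison, here is a sketch of the same state query via the Docker Engine Go SDK (assuming github.com/docker/docker/client is available — this is not what minikube itself does), which lets "no such container" be distinguished without parsing stderr text.

    package main

    import (
        "context"
        "fmt"

        "github.com/docker/docker/client"
    )

    // containerState asks the Docker daemon directly for a container's state,
    // distinguishing "no such container" from other daemon errors.
    func containerState(ctx context.Context, name string) (string, error) {
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            return "", err
        }
        defer cli.Close()

        info, err := cli.ContainerInspect(ctx, name)
        if client.IsErrNotFound(err) {
            return "Nonexistent", nil // the container was removed, as in the log above
        }
        if err != nil {
            return "", err
        }
        return info.State.Status, nil
    }

    func main() {
        state, err := containerState(context.Background(), "multinode-948000")
        fmt.Println(state, err)
    }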

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-948000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-948000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (36.3211ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-948000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-948000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-948000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-948000
helpers_test.go:235: (dbg) docker inspect multinode-948000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-948000",
	        "Id": "2df7cdd9ba16c611162789505bb6ec50480a79f722b8e718632ad6626b1dfce8",
	        "Created": "2024-04-25T19:18:34.463396741Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-948000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000: exit status 7 (139.353098ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 12:26:31.551844   21119 status.go:249] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-948000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.23s)
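The decode failure at multinode_test.go:230 follows directly from the kubectl failure just above it: with the `multinode-948000` context gone, kubectl writes nothing to stdout, and `encoding/json` reports exactly "unexpected end of JSON input" when asked to unmarshal zero bytes. A minimal reproduction (sketch only):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// kubectl exited 1 and printed nothing, so the test effectively does this:
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}
```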

                                                
                                    
TestMultiNode/serial/ProfileList (0.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:166: expected profile "multinode-948000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[{\"Name\":\"mount-start-2-636000\",\"Status\":\"\",\"Config\":null,\"Active\":false,\"ActiveKubeContext\":false}],\"valid\":[{\"Name\":\"multinode-948000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-948000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"multinode-948000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-948000
helpers_test.go:235: (dbg) docker inspect multinode-948000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-948000",
	        "Id": "2df7cdd9ba16c611162789505bb6ec50480a79f722b8e718632ad6626b1dfce8",
	        "Created": "2024-04-25T19:18:34.463396741Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-948000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000: exit status 7 (114.153591ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 12:26:31.904755   21131 status.go:249] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-948000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.35s)
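The assertion at multinode_test.go:166 counts the entries of the profile's `Nodes` array in the `profile list --output json` payload; the JSON captured above contains a single unnamed control-plane node where three nodes were expected. A sketch of that count, using a struct trimmed to just the fields involved (field names follow the JSON above; this is not minikube's config package):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// profileList models just enough of `minikube profile list --output json`
// to count nodes; field names follow the JSON captured above, and this is
// a sketch rather than minikube's config types.
type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				ControlPlane bool
				Worker       bool
			}
		}
	} `json:"valid"`
}

func main() {
	// Trimmed to the one surviving node from the payload above.
	raw := []byte(`{"valid":[{"Name":"multinode-948000",
		"Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(pl.Valid[0].Config.Nodes)) // 1, not the expected 3
}
```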

                                                
                                    
TestMultiNode/serial/CopyFile (0.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-948000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-948000 status --output json --alsologtostderr: exit status 7 (113.3417ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-948000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 12:26:31.968210   21135 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:26:31.968435   21135 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:26:31.968441   21135 out.go:304] Setting ErrFile to fd 2...
	I0425 12:26:31.968444   21135 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:26:31.968616   21135 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 12:26:31.968798   21135 out.go:298] Setting JSON to true
	I0425 12:26:31.968825   21135 mustload.go:65] Loading cluster: multinode-948000
	I0425 12:26:31.968871   21135 notify.go:220] Checking for updates...
	I0425 12:26:31.969830   21135 config.go:182] Loaded profile config "multinode-948000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:26:31.969906   21135 status.go:255] checking status of multinode-948000 ...
	I0425 12:26:31.970512   21135 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:26:32.018151   21135 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:26:32.018215   21135 status.go:330] multinode-948000 host status = "" (err=state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	)
	I0425 12:26:32.018234   21135 status.go:257] multinode-948000 status: &{Name:multinode-948000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0425 12:26:32.018253   21135 status.go:260] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	E0425 12:26:32.018260   21135 status.go:263] The "multinode-948000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-948000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-948000
helpers_test.go:235: (dbg) docker inspect multinode-948000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-948000",
	        "Id": "2df7cdd9ba16c611162789505bb6ec50480a79f722b8e718632ad6626b1dfce8",
	        "Created": "2024-04-25T19:18:34.463396741Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-948000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000: exit status 7 (113.758268ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 12:26:32.183971   21141 status.go:249] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-948000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.28s)
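The unmarshal error at multinode_test.go:191 is a shape mismatch visible in the stdout above: the test decodes into a slice (`[]cmd.Status`), but with the cluster reduced to a single record, `status --output json` printed one JSON object rather than an array. A minimal reproduction, with a stand-in `Status` type mirroring the fields shown:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Status is a stand-in mirroring the fields printed above; the real test
// decodes into []cmd.Status, i.e. it expects a JSON array.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	// minikube printed one object, not an array of per-node objects:
	out := []byte(`{"Name":"multinode-948000","Host":"Nonexistent",` +
		`"Kubelet":"Nonexistent","APIServer":"Nonexistent",` +
		`"Kubeconfig":"Nonexistent","Worker":false}`)
	var statuses []Status
	err := json.Unmarshal(out, &statuses)
	// Prints: json: cannot unmarshal object into Go value of type []main.Status
	// (the test reports []cmd.Status, its own slice type).
	fmt.Println(err)
}
```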

                                                
                                    
TestMultiNode/serial/StopNode (0.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-948000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-948000 node stop m03: exit status 85 (166.686393ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-948000 node stop m03": exit status 85
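Exit status 85 with `GUEST_NODE_RETRIEVE` means this failure happens before any Docker call: `node stop m03` first resolves the node name against the profile config, and the surviving config (see the ProfileList JSON earlier) lists only a single unnamed control-plane node, so there is no `m03` to stop. A sketch of that kind of lookup (types and names here are hypothetical, not minikube's node package):

```go
package main

import (
	"errors"
	"fmt"
)

// node and findNode are hypothetical stand-ins for the profile-config
// lookup that precedes "minikube node stop"; not minikube's node package.
type node struct {
	Name         string
	ControlPlane bool
}

func findNode(nodes []node, name string) (node, error) {
	for _, n := range nodes {
		if n.Name == name {
			return n, nil
		}
	}
	return node{}, errors.New("retrieving node: Could not find node " + name)
}

func main() {
	// The surviving profile lists one unnamed control-plane node.
	nodes := []node{{Name: "", ControlPlane: true}}
	if _, err := findNode(nodes, "m03"); err != nil {
		fmt.Println("X Exiting due to GUEST_NODE_RETRIEVE:", err)
	}
}
```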
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-948000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-948000 status: exit status 7 (114.09262ms)

                                                
                                                
-- stdout --
	multinode-948000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 12:26:32.465416   21147 status.go:260] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	E0425 12:26:32.465427   21147 status.go:263] The "multinode-948000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-948000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-948000 status --alsologtostderr: exit status 7 (113.873726ms)

                                                
                                                
-- stdout --
	multinode-948000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 12:26:32.529480   21151 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:26:32.529753   21151 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:26:32.529759   21151 out.go:304] Setting ErrFile to fd 2...
	I0425 12:26:32.529763   21151 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:26:32.529925   21151 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 12:26:32.530094   21151 out.go:298] Setting JSON to false
	I0425 12:26:32.530124   21151 mustload.go:65] Loading cluster: multinode-948000
	I0425 12:26:32.530166   21151 notify.go:220] Checking for updates...
	I0425 12:26:32.530406   21151 config.go:182] Loaded profile config "multinode-948000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:26:32.530421   21151 status.go:255] checking status of multinode-948000 ...
	I0425 12:26:32.530799   21151 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:26:32.579321   21151 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:26:32.579372   21151 status.go:330] multinode-948000 host status = "" (err=state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	)
	I0425 12:26:32.579390   21151 status.go:257] multinode-948000 status: &{Name:multinode-948000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0425 12:26:32.579406   21151 status.go:260] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	E0425 12:26:32.579414   21151 status.go:263] The "multinode-948000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-948000 status --alsologtostderr": multinode-948000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:271: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-948000 status --alsologtostderr": multinode-948000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:275: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-948000 status --alsologtostderr": multinode-948000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-948000
helpers_test.go:235: (dbg) docker inspect multinode-948000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-948000",
	        "Id": "2df7cdd9ba16c611162789505bb6ec50480a79f722b8e718632ad6626b1dfce8",
	        "Created": "2024-04-25T19:18:34.463396741Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-948000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000: exit status 7 (113.548583ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 12:26:32.744338   21157 status.go:249] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-948000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.56s)
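The three "incorrect number of ..." complaints at multinode_test.go:267, 271 and 275 come from counting markers in the plain-text status output; because the only host line printed is `host: Nonexistent`, every expected `Running`/`Stopped` count is zero. A sketch of that counting style (the real test's exact expectations are not shown here):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status text as printed above for the surviving profile entry.
	status := "multinode-948000\ntype: Control Plane\nhost: Nonexistent\n" +
		"kubelet: Nonexistent\napiserver: Nonexistent\nkubeconfig: Nonexistent\n"

	// Counting markers like these is the style the test uses; every count
	// is 0 here, so the running/stopped expectations all fail at once.
	fmt.Println("running kubelets:", strings.Count(status, "kubelet: Running"))
	fmt.Println("stopped hosts:   ", strings.Count(status, "host: Stopped"))
	fmt.Println("stopped kubelets:", strings.Count(status, "kubelet: Stopped"))
}
```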

                                                
                                    
TestMultiNode/serial/StartAfterStop (43.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-948000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-948000 node start m03 -v=7 --alsologtostderr: exit status 85 (154.437991ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 12:26:32.808383   21161 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:26:32.809103   21161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:26:32.809112   21161 out.go:304] Setting ErrFile to fd 2...
	I0425 12:26:32.809118   21161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:26:32.809668   21161 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 12:26:32.810028   21161 mustload.go:65] Loading cluster: multinode-948000
	I0425 12:26:32.810285   21161 config.go:182] Loaded profile config "multinode-948000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:26:32.831340   21161 out.go:177] 
	W0425 12:26:32.851960   21161 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0425 12:26:32.851978   21161 out.go:239] * 
	* 
	W0425 12:26:32.856660   21161 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0425 12:26:32.877208   21161 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0425 12:26:32.808383   21161 out.go:291] Setting OutFile to fd 1 ...
I0425 12:26:32.809103   21161 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 12:26:32.809112   21161 out.go:304] Setting ErrFile to fd 2...
I0425 12:26:32.809118   21161 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 12:26:32.809668   21161 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
I0425 12:26:32.810028   21161 mustload.go:65] Loading cluster: multinode-948000
I0425 12:26:32.810285   21161 config.go:182] Loaded profile config "multinode-948000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0425 12:26:32.831340   21161 out.go:177] 
W0425 12:26:32.851960   21161 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0425 12:26:32.851978   21161 out.go:239] * 
* 
W0425 12:26:32.856660   21161 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0425 12:26:32.877208   21161 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-948000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-948000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-948000 status -v=7 --alsologtostderr: exit status 7 (113.585169ms)

                                                
                                                
-- stdout --
	multinode-948000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 12:26:32.963437   21163 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:26:32.963628   21163 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:26:32.963634   21163 out.go:304] Setting ErrFile to fd 2...
	I0425 12:26:32.963637   21163 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:26:32.963813   21163 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 12:26:32.963992   21163 out.go:298] Setting JSON to false
	I0425 12:26:32.964014   21163 mustload.go:65] Loading cluster: multinode-948000
	I0425 12:26:32.964055   21163 notify.go:220] Checking for updates...
	I0425 12:26:32.964305   21163 config.go:182] Loaded profile config "multinode-948000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:26:32.964319   21163 status.go:255] checking status of multinode-948000 ...
	I0425 12:26:32.964701   21163 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:26:33.012676   21163 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:26:33.012737   21163 status.go:330] multinode-948000 host status = "" (err=state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	)
	I0425 12:26:33.012757   21163 status.go:257] multinode-948000 status: &{Name:multinode-948000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0425 12:26:33.012778   21163 status.go:260] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	E0425 12:26:33.012786   21163 status.go:263] The "multinode-948000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-948000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-948000 status -v=7 --alsologtostderr: exit status 7 (115.737493ms)

                                                
                                                
-- stdout --
	multinode-948000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 12:26:33.679136   21167 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:26:33.679349   21167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:26:33.679355   21167 out.go:304] Setting ErrFile to fd 2...
	I0425 12:26:33.679358   21167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:26:33.679548   21167 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 12:26:33.679727   21167 out.go:298] Setting JSON to false
	I0425 12:26:33.679750   21167 mustload.go:65] Loading cluster: multinode-948000
	I0425 12:26:33.679787   21167 notify.go:220] Checking for updates...
	I0425 12:26:33.680061   21167 config.go:182] Loaded profile config "multinode-948000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:26:33.680074   21167 status.go:255] checking status of multinode-948000 ...
	I0425 12:26:33.680450   21167 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:26:33.728160   21167 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:26:33.728216   21167 status.go:330] multinode-948000 host status = "" (err=state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	)
	I0425 12:26:33.728240   21167 status.go:257] multinode-948000 status: &{Name:multinode-948000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0425 12:26:33.728258   21167 status.go:260] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	E0425 12:26:33.728266   21167 status.go:263] The "multinode-948000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-948000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-948000 status -v=7 --alsologtostderr: exit status 7 (118.345941ms)

                                                
                                                
-- stdout --
	multinode-948000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 12:26:36.035953   21173 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:26:36.036158   21173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:26:36.036163   21173 out.go:304] Setting ErrFile to fd 2...
	I0425 12:26:36.036167   21173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:26:36.036362   21173 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 12:26:36.036537   21173 out.go:298] Setting JSON to false
	I0425 12:26:36.036559   21173 mustload.go:65] Loading cluster: multinode-948000
	I0425 12:26:36.036600   21173 notify.go:220] Checking for updates...
	I0425 12:26:36.036842   21173 config.go:182] Loaded profile config "multinode-948000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:26:36.036855   21173 status.go:255] checking status of multinode-948000 ...
	I0425 12:26:36.037223   21173 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:26:36.086274   21173 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:26:36.086338   21173 status.go:330] multinode-948000 host status = "" (err=state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	)
	I0425 12:26:36.086361   21173 status.go:257] multinode-948000 status: &{Name:multinode-948000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0425 12:26:36.086379   21173 status.go:260] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	E0425 12:26:36.086386   21173 status.go:263] The "multinode-948000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-948000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-948000 status -v=7 --alsologtostderr: exit status 7 (115.748897ms)

                                                
                                                
-- stdout --
	multinode-948000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 12:26:38.264860   21177 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:26:38.265402   21177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:26:38.265509   21177 out.go:304] Setting ErrFile to fd 2...
	I0425 12:26:38.265522   21177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:26:38.266154   21177 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 12:26:38.266346   21177 out.go:298] Setting JSON to false
	I0425 12:26:38.266369   21177 mustload.go:65] Loading cluster: multinode-948000
	I0425 12:26:38.266407   21177 notify.go:220] Checking for updates...
	I0425 12:26:38.266657   21177 config.go:182] Loaded profile config "multinode-948000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:26:38.266671   21177 status.go:255] checking status of multinode-948000 ...
	I0425 12:26:38.267050   21177 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:26:38.315943   21177 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:26:38.316009   21177 status.go:330] multinode-948000 host status = "" (err=state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	)
	I0425 12:26:38.316027   21177 status.go:257] multinode-948000 status: &{Name:multinode-948000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0425 12:26:38.316047   21177 status.go:260] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	E0425 12:26:38.316057   21177 status.go:263] The "multinode-948000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-948000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-948000 status -v=7 --alsologtostderr: exit status 7 (115.122045ms)

                                                
                                                
-- stdout --
	multinode-948000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 12:26:43.005356   21188 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:26:43.005670   21188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:26:43.005677   21188 out.go:304] Setting ErrFile to fd 2...
	I0425 12:26:43.005680   21188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:26:43.005861   21188 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 12:26:43.006044   21188 out.go:298] Setting JSON to false
	I0425 12:26:43.006066   21188 mustload.go:65] Loading cluster: multinode-948000
	I0425 12:26:43.006107   21188 notify.go:220] Checking for updates...
	I0425 12:26:43.006360   21188 config.go:182] Loaded profile config "multinode-948000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:26:43.006375   21188 status.go:255] checking status of multinode-948000 ...
	I0425 12:26:43.006757   21188 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:26:43.054500   21188 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:26:43.054560   21188 status.go:330] multinode-948000 host status = "" (err=state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	)
	I0425 12:26:43.054586   21188 status.go:257] multinode-948000 status: &{Name:multinode-948000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0425 12:26:43.054604   21188 status.go:260] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	E0425 12:26:43.054611   21188 status.go:263] The "multinode-948000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-948000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-948000 status -v=7 --alsologtostderr: exit status 7 (115.897018ms)

                                                
                                                
-- stdout --
	multinode-948000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 12:26:48.587116   21196 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:26:48.587331   21196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:26:48.587337   21196 out.go:304] Setting ErrFile to fd 2...
	I0425 12:26:48.587340   21196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:26:48.587521   21196 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 12:26:48.587689   21196 out.go:298] Setting JSON to false
	I0425 12:26:48.587720   21196 mustload.go:65] Loading cluster: multinode-948000
	I0425 12:26:48.587761   21196 notify.go:220] Checking for updates...
	I0425 12:26:48.587995   21196 config.go:182] Loaded profile config "multinode-948000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:26:48.588010   21196 status.go:255] checking status of multinode-948000 ...
	I0425 12:26:48.588392   21196 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:26:48.637178   21196 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:26:48.637234   21196 status.go:330] multinode-948000 host status = "" (err=state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	)
	I0425 12:26:48.637254   21196 status.go:257] multinode-948000 status: &{Name:multinode-948000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0425 12:26:48.637271   21196 status.go:260] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	E0425 12:26:48.637277   21196 status.go:263] The "multinode-948000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-948000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-948000 status -v=7 --alsologtostderr: exit status 7 (115.81858ms)

                                                
                                                
-- stdout --
	multinode-948000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 12:26:53.395961   21201 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:26:53.396729   21201 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:26:53.396738   21201 out.go:304] Setting ErrFile to fd 2...
	I0425 12:26:53.396744   21201 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:26:53.397261   21201 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 12:26:53.397466   21201 out.go:298] Setting JSON to false
	I0425 12:26:53.397490   21201 mustload.go:65] Loading cluster: multinode-948000
	I0425 12:26:53.397530   21201 notify.go:220] Checking for updates...
	I0425 12:26:53.397747   21201 config.go:182] Loaded profile config "multinode-948000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:26:53.397761   21201 status.go:255] checking status of multinode-948000 ...
	I0425 12:26:53.398131   21201 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:26:53.446949   21201 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:26:53.447002   21201 status.go:330] multinode-948000 host status = "" (err=state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	)
	I0425 12:26:53.447023   21201 status.go:257] multinode-948000 status: &{Name:multinode-948000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0425 12:26:53.447040   21201 status.go:260] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	E0425 12:26:53.447048   21201 status.go:263] The "multinode-948000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-948000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-948000 status -v=7 --alsologtostderr: exit status 7 (116.99794ms)

                                                
                                                
-- stdout --
	multinode-948000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 12:27:04.780596   21206 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:27:04.780868   21206 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:27:04.780873   21206 out.go:304] Setting ErrFile to fd 2...
	I0425 12:27:04.780877   21206 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:27:04.781053   21206 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 12:27:04.781225   21206 out.go:298] Setting JSON to false
	I0425 12:27:04.781247   21206 mustload.go:65] Loading cluster: multinode-948000
	I0425 12:27:04.781289   21206 notify.go:220] Checking for updates...
	I0425 12:27:04.781523   21206 config.go:182] Loaded profile config "multinode-948000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:27:04.781538   21206 status.go:255] checking status of multinode-948000 ...
	I0425 12:27:04.781934   21206 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:27:04.829774   21206 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:27:04.829839   21206 status.go:330] multinode-948000 host status = "" (err=state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	)
	I0425 12:27:04.829856   21206 status.go:257] multinode-948000 status: &{Name:multinode-948000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0425 12:27:04.829874   21206 status.go:260] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	E0425 12:27:04.829881   21206 status.go:263] The "multinode-948000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-948000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-948000 status -v=7 --alsologtostderr: exit status 7 (121.427302ms)

                                                
                                                
-- stdout --
	multinode-948000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 12:27:15.622467   21211 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:27:15.622702   21211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:27:15.622707   21211 out.go:304] Setting ErrFile to fd 2...
	I0425 12:27:15.622711   21211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:27:15.622883   21211 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 12:27:15.623068   21211 out.go:298] Setting JSON to false
	I0425 12:27:15.623095   21211 mustload.go:65] Loading cluster: multinode-948000
	I0425 12:27:15.623131   21211 notify.go:220] Checking for updates...
	I0425 12:27:15.623818   21211 config.go:182] Loaded profile config "multinode-948000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:27:15.623846   21211 status.go:255] checking status of multinode-948000 ...
	I0425 12:27:15.624627   21211 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:27:15.675943   21211 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:27:15.676011   21211 status.go:330] multinode-948000 host status = "" (err=state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	)
	I0425 12:27:15.676031   21211 status.go:257] multinode-948000 status: &{Name:multinode-948000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0425 12:27:15.676051   21211 status.go:260] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	E0425 12:27:15.676063   21211 status.go:263] The "multinode-948000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-948000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-948000
helpers_test.go:235: (dbg) docker inspect multinode-948000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-948000",
	        "Id": "2df7cdd9ba16c611162789505bb6ec50480a79f722b8e718632ad6626b1dfce8",
	        "Created": "2024-04-25T19:18:34.463396741Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-948000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
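Note that the post-mortem "docker inspect multinode-948000" above matched the leftover network of that name rather than a container (hence the Scope, Driver, and IPAM fields): without an explicit --type, docker inspect falls back across object kinds. A short sketch that targets the network directly and pulls just its subnet and gateway; for illustration only:

    // netinfo.go: a sketch that reads the subnet/gateway of the leftover
    // minikube network shown above via `docker network inspect` and a Go
    // template. Not part of the test suite.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("docker", "network", "inspect",
            "multinode-948000", "--format",
            "{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}").Output()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(out)) // e.g. "192.168.85.0/24 192.168.85.1"
    }

Pinning the object type this way avoids the container/network ambiguity the post-mortem inspect ran into.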
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000: exit status 7 (114.904565ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 12:27:15.842924   21217 status.go:249] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-948000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (43.10s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (791.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-948000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-948000
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-948000: exit status 82 (13.765900772s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-948000"  ...
	* Stopping node "multinode-948000"  ...
	* Stopping node "multinode-948000"  ...
	* Stopping node "multinode-948000"  ...
	* Stopping node "multinode-948000"  ...
	* Stopping node "multinode-948000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-948000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-948000" : exit status 82
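Exit status 82 (GUEST_STOP_TIMEOUT) above is minikube giving up after its stop loop can never confirm the node halted: every verification probe fails because the container no longer exists. A generic stop-then-verify-with-deadline sketch; the one-second poll and ten-second budget are illustrative assumptions, not minikube's settings:

    // stopverify.go: a generic sketch of stop-then-verify-with-deadline, in
    // the spirit of the stop loop above. Values and names are illustrative.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func stopAndVerify(name string, budget time.Duration) error {
        _ = exec.Command("docker", "stop", name).Run() // best-effort stop
        end := time.Now().Add(budget)
        for time.Now().Before(end) {
            out, err := exec.Command("docker", "container", "inspect",
                name, "--format", "{{.State.Status}}").CombinedOutput()
            if err == nil && strings.TrimSpace(string(out)) == "exited" {
                return nil
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("timed out waiting for %s to stop", name)
    }

    func main() {
        fmt.Println(stopAndVerify("multinode-948000", 10*time.Second))
    }

In the failing run above, the inspect probe errors on every iteration, so a loop like this exhausts its deadline exactly as the real stop did.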
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-948000 --wait=true -v=8 --alsologtostderr
E0425 12:28:45.983774    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 12:29:02.933123    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 12:30:09.007112    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 12:34:02.956796    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 12:34:52.066950    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 12:35:09.012348    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 12:39:02.957058    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 12:40:09.011586    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-948000 --wait=true -v=8 --alsologtostderr: exit status 52 (12m56.963010365s)

                                                
                                                
-- stdout --
	* [multinode-948000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18757
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-948000" primary control-plane node in "multinode-948000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* docker "multinode-948000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-948000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 12:27:29.740412   21240 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:27:29.740617   21240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:27:29.740622   21240 out.go:304] Setting ErrFile to fd 2...
	I0425 12:27:29.740626   21240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:27:29.740832   21240 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 12:27:29.742465   21240 out.go:298] Setting JSON to false
	I0425 12:27:29.765195   21240 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":10620,"bootTime":1714062629,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0425 12:27:29.765303   21240 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0425 12:27:29.786909   21240 out.go:177] * [multinode-948000] minikube v1.33.0 on Darwin 14.4.1
	I0425 12:27:29.828533   21240 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 12:27:29.828591   21240 notify.go:220] Checking for updates...
	I0425 12:27:29.849552   21240 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
	I0425 12:27:29.870665   21240 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0425 12:27:29.891458   21240 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 12:27:29.912586   21240 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube
	I0425 12:27:29.933557   21240 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 12:27:29.955322   21240 config.go:182] Loaded profile config "multinode-948000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:27:29.955486   21240 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 12:27:30.010772   21240 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0425 12:27:30.010934   21240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0425 12:27:30.167796   21240 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:83 OomKillDisable:false NGoroutines:125 SystemTime:2024-04-25 19:27:30.144355358 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211088384 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0425 12:27:30.212117   21240 out.go:177] * Using the docker driver based on existing profile
	I0425 12:27:30.232339   21240 start.go:297] selected driver: docker
	I0425 12:27:30.232373   21240 start.go:901] validating driver "docker" against &{Name:multinode-948000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-948000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 12:27:30.232495   21240 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 12:27:30.232693   21240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0425 12:27:30.341001   21240 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:83 OomKillDisable:false NGoroutines:125 SystemTime:2024-04-25 19:27:30.330462207 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211088384 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0425 12:27:30.344003   21240 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 12:27:30.344038   21240 cni.go:84] Creating CNI manager for ""
	I0425 12:27:30.344047   21240 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0425 12:27:30.344120   21240 start.go:340] cluster config:
	{Name:multinode-948000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-948000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 12:27:30.386612   21240 out.go:177] * Starting "multinode-948000" primary control-plane node in "multinode-948000" cluster
	I0425 12:27:30.407564   21240 cache.go:121] Beginning downloading kic base image for docker with docker
	I0425 12:27:30.428607   21240 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0425 12:27:30.470457   21240 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 12:27:30.470508   21240 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0425 12:27:30.470533   21240 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0425 12:27:30.470555   21240 cache.go:56] Caching tarball of preloaded images
	I0425 12:27:30.470784   21240 preload.go:173] Found /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0425 12:27:30.470806   21240 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0425 12:27:30.470976   21240 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/multinode-948000/config.json ...
	I0425 12:27:30.522208   21240 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0425 12:27:30.522231   21240 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0425 12:27:30.522251   21240 cache.go:194] Successfully downloaded all kic artifacts
	I0425 12:27:30.522295   21240 start.go:360] acquireMachinesLock for multinode-948000: {Name:mkc22316bab7a305bfcfe18e5a80258ef7beb819 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 12:27:30.522395   21240 start.go:364] duration metric: took 82.07µs to acquireMachinesLock for "multinode-948000"
	I0425 12:27:30.522420   21240 start.go:96] Skipping create...Using existing machine configuration
	I0425 12:27:30.522432   21240 fix.go:54] fixHost starting: 
	I0425 12:27:30.522661   21240 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:27:30.571259   21240 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:27:30.571339   21240 fix.go:112] recreateIfNeeded on multinode-948000: state= err=unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:27:30.571357   21240 fix.go:117] machineExists: false. err=machine does not exist
	I0425 12:27:30.593218   21240 out.go:177] * docker "multinode-948000" container is missing, will recreate.
	I0425 12:27:30.634687   21240 delete.go:124] DEMOLISHING multinode-948000 ...
	I0425 12:27:30.634826   21240 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:27:30.682433   21240 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	W0425 12:27:30.682481   21240 stop.go:83] unable to get state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:27:30.682497   21240 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:27:30.682878   21240 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:27:30.731078   21240 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:27:30.731131   21240 delete.go:82] Unable to get host status for multinode-948000, assuming it has already been deleted: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:27:30.731210   21240 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-948000
	W0425 12:27:30.778719   21240 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-948000 returned with exit code 1
	I0425 12:27:30.778751   21240 kic.go:371] could not find the container multinode-948000 to remove it. will try anyways
	I0425 12:27:30.778833   21240 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:27:30.827793   21240 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	W0425 12:27:30.827841   21240 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:27:30.827916   21240 cli_runner.go:164] Run: docker exec --privileged -t multinode-948000 /bin/bash -c "sudo init 0"
	W0425 12:27:30.875825   21240 cli_runner.go:211] docker exec --privileged -t multinode-948000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0425 12:27:30.875856   21240 oci.go:650] error shutdown multinode-948000: docker exec --privileged -t multinode-948000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:27:31.876624   21240 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:27:31.928503   21240 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:27:31.928553   21240 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:27:31.928565   21240 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:27:31.928604   21240 retry.go:31] will retry after 320.966823ms: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:27:32.251896   21240 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:27:32.304716   21240 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:27:32.304758   21240 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:27:32.304766   21240 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:27:32.304794   21240 retry.go:31] will retry after 862.720976ms: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:27:33.167812   21240 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:27:33.219711   21240 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:27:33.219756   21240 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:27:33.219770   21240 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:27:33.219797   21240 retry.go:31] will retry after 862.096328ms: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:27:34.082886   21240 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:27:34.134767   21240 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:27:34.134810   21240 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:27:34.134817   21240 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:27:34.134844   21240 retry.go:31] will retry after 2.108684167s: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:27:36.245865   21240 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:27:36.296119   21240 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:27:36.296164   21240 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:27:36.296178   21240 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:27:36.296203   21240 retry.go:31] will retry after 3.634733291s: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:27:39.932546   21240 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:27:39.983085   21240 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:27:39.983127   21240 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:27:39.983135   21240 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:27:39.983157   21240 retry.go:31] will retry after 3.221564715s: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:27:43.206132   21240 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:27:43.256933   21240 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:27:43.256980   21240 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:27:43.256991   21240 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:27:43.257025   21240 retry.go:31] will retry after 4.704438298s: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:27:47.963893   21240 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:27:48.016095   21240 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:27:48.016137   21240 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:27:48.016148   21240 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:27:48.016180   21240 oci.go:88] couldn't shut down multinode-948000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	 
	I0425 12:27:48.016259   21240 cli_runner.go:164] Run: docker rm -f -v multinode-948000
	I0425 12:27:48.066636   21240 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-948000
	W0425 12:27:48.113973   21240 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-948000 returned with exit code 1
	I0425 12:27:48.114092   21240 cli_runner.go:164] Run: docker network inspect multinode-948000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0425 12:27:48.162357   21240 cli_runner.go:164] Run: docker network rm multinode-948000
	I0425 12:27:48.267904   21240 fix.go:124] Sleeping 1 second for extra luck!
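The shutdown-verification waits earlier in this block (321ms, 863ms, 862ms, 2.1s, 3.6s, 3.2s, 4.7s) follow a roughly doubling, jittered backoff before retry.go gives up with "might be okay". A generic sketch of that pattern; the initial delay, growth factor, and jitter are assumed values, since the log does not show the actual retry policy:

    // backoff.go: a generic jittered-backoff sketch matching the cadence of
    // the retry.go waits above. Initial delay, growth factor, and jitter are
    // assumptions for illustration, not minikube's actual policy.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retryWithBackoff(budget time.Duration, op func() error) error {
        delay := 300 * time.Millisecond
        stop := time.Now().Add(budget)
        for {
            err := op()
            if err == nil {
                return nil
            }
            if time.Now().After(stop) {
                return fmt.Errorf("giving up (might be okay): %w", err)
            }
            // jitter: sleep between 1x and 2x the nominal delay
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            delay *= 2
        }
    }

    func main() {
        _ = retryWithBackoff(15*time.Second, func() error {
            return errors.New("couldn't verify container is exited")
        })
    }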
	I0425 12:27:49.270057   21240 start.go:125] createHost starting for "" (driver="docker")
	I0425 12:27:49.292995   21240 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0425 12:27:49.293166   21240 start.go:159] libmachine.API.Create for "multinode-948000" (driver="docker")
	I0425 12:27:49.293233   21240 client.go:168] LocalClient.Create starting
	I0425 12:27:49.293459   21240 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/ca.pem
	I0425 12:27:49.293555   21240 main.go:141] libmachine: Decoding PEM data...
	I0425 12:27:49.293596   21240 main.go:141] libmachine: Parsing certificate...
	I0425 12:27:49.293724   21240 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/cert.pem
	I0425 12:27:49.293804   21240 main.go:141] libmachine: Decoding PEM data...
	I0425 12:27:49.293820   21240 main.go:141] libmachine: Parsing certificate...
	I0425 12:27:49.314336   21240 cli_runner.go:164] Run: docker network inspect multinode-948000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0425 12:27:49.364763   21240 cli_runner.go:211] docker network inspect multinode-948000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0425 12:27:49.364847   21240 network_create.go:281] running [docker network inspect multinode-948000] to gather additional debugging logs...
	I0425 12:27:49.364866   21240 cli_runner.go:164] Run: docker network inspect multinode-948000
	W0425 12:27:49.412459   21240 cli_runner.go:211] docker network inspect multinode-948000 returned with exit code 1
	I0425 12:27:49.412487   21240 network_create.go:284] error running [docker network inspect multinode-948000]: docker network inspect multinode-948000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-948000 not found
	I0425 12:27:49.412500   21240 network_create.go:286] output of [docker network inspect multinode-948000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-948000 not found
	
	** /stderr **
	I0425 12:27:49.412650   21240 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0425 12:27:49.462349   21240 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 12:27:49.463746   21240 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 12:27:49.464084   21240 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002339290}
	I0425 12:27:49.464101   21240 network_create.go:124] attempt to create docker network multinode-948000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0425 12:27:49.464169   21240 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-948000 multinode-948000
	I0425 12:27:49.547675   21240 network_create.go:108] docker network multinode-948000 192.168.67.0/24 created
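The subnet selection just above walks candidate private /24 blocks, skipping 192.168.49.0/24 and 192.168.58.0/24 as reserved before settling on 192.168.67.0/24 (the third octet advances by 9). A sketch of that walk; treating "reserved" as "already claimed by an existing docker network" is an assumption made for illustration:

    // picksubnet.go: a sketch of the free-subnet walk shown in the log
    // (49 -> 58 -> 67, stepping the third octet by 9). The in-use check
    // via existing docker networks is an illustrative assumption.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func subnetInUse(cidr string) bool {
        out, _ := exec.Command("docker", "network", "ls", "-q").Output()
        for _, id := range strings.Fields(string(out)) {
            cfg, _ := exec.Command("docker", "network", "inspect", id,
                "--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").Output()
            if strings.TrimSpace(string(cfg)) == cidr {
                return true
            }
        }
        return false
    }

    func main() {
        for octet := 49; octet <= 247; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if subnetInUse(cidr) {
                fmt.Println("skipping reserved subnet", cidr)
                continue
            }
            fmt.Println("using free private subnet", cidr)
            break
        }
    }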
	I0425 12:27:49.547786   21240 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-948000" container
	I0425 12:27:49.547896   21240 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0425 12:27:49.596749   21240 cli_runner.go:164] Run: docker volume create multinode-948000 --label name.minikube.sigs.k8s.io=multinode-948000 --label created_by.minikube.sigs.k8s.io=true
	I0425 12:27:49.644055   21240 oci.go:103] Successfully created a docker volume multinode-948000
	I0425 12:27:49.644159   21240 cli_runner.go:164] Run: docker run --rm --name multinode-948000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-948000 --entrypoint /usr/bin/test -v multinode-948000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0425 12:27:49.885592   21240 oci.go:107] Successfully prepared a docker volume multinode-948000
	I0425 12:27:49.885645   21240 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 12:27:49.885658   21240 kic.go:194] Starting extracting preloaded images to volume ...
	I0425 12:27:49.885750   21240 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-948000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
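The extraction step above uses a common docker trick: run a throwaway container with tar as its entrypoint, bind-mount the host tarball read-only, and mount the named volume as the extraction target, so the preloaded images land inside the volume. A stripped-down sketch with placeholder names; the log's variant adds -I lz4 because its preload tarball is lz4-compressed and the kicbase image ships the decompressor:

    // extract.go: a sketch of unpacking a host tarball into a named docker
    // volume via a throwaway container. Paths, volume, and image here are
    // illustrative placeholders, not the values from the log.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", "/tmp/preload.tar:/preload.tar:ro", // host tarball, read-only
            "-v", "demo-vol:/extractDir", // named volume receives the files
            "ubuntu", // any image that ships GNU tar
            "-xf", "/preload.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }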
	I0425 12:33:49.318087   21240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:33:49.318221   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:33:49.370580   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:33:49.370702   21240 retry.go:31] will retry after 235.942083ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:33:49.609010   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:33:49.661010   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:33:49.661115   21240 retry.go:31] will retry after 404.582399ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:33:50.068056   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:33:50.118997   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:33:50.119108   21240 retry.go:31] will retry after 554.103396ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:33:50.674913   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:33:50.728063   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	W0425 12:33:50.728165   21240 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	
	W0425 12:33:50.728189   21240 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:33:50.728252   21240 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0425 12:33:50.728309   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:33:50.778253   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:33:50.778347   21240 retry.go:31] will retry after 322.532255ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:33:51.101500   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:33:51.152856   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:33:51.152954   21240 retry.go:31] will retry after 242.964474ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:33:51.398222   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:33:51.449901   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:33:51.450013   21240 retry.go:31] will retry after 645.623085ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:33:52.097208   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:33:52.149721   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	W0425 12:33:52.149825   21240 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	
	W0425 12:33:52.149838   21240 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:33:52.149855   21240 start.go:128] duration metric: took 6m2.85660719s to createHost
	I0425 12:33:52.149919   21240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:33:52.149982   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:33:52.198710   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:33:52.198799   21240 retry.go:31] will retry after 166.330701ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:33:52.366675   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:33:52.415238   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:33:52.415344   21240 retry.go:31] will retry after 496.542732ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:33:52.914251   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:33:52.965934   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:33:52.966036   21240 retry.go:31] will retry after 716.508877ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:33:53.683953   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:33:53.735795   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	W0425 12:33:53.735896   21240 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	
	W0425 12:33:53.735910   21240 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:33:53.735970   21240 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0425 12:33:53.736029   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:33:53.785657   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:33:53.785759   21240 retry.go:31] will retry after 173.549201ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:33:53.959975   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:33:54.010738   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:33:54.010837   21240 retry.go:31] will retry after 553.686773ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:33:54.566911   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:33:54.618098   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:33:54.618198   21240 retry.go:31] will retry after 555.182461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:33:55.174599   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:33:55.226082   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	W0425 12:33:55.226187   21240 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	
	W0425 12:33:55.226202   21240 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:33:55.226221   21240 fix.go:56] duration metric: took 6m24.680586295s for fixHost
	I0425 12:33:55.226227   21240 start.go:83] releasing machines lock for "multinode-948000", held for 6m24.680618258s
	W0425 12:33:55.226244   21240 start.go:713] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W0425 12:33:55.226311   21240 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I0425 12:33:55.226317   21240 start.go:728] Will try again in 5 seconds ...
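	The whole block above is one failure repeated: before it can run its disk-space checks (the sh -c "df -h /var ..." and "df -BG /var ..." probes), minikube must resolve the node's SSH endpoint, and it does that by reading the host port Docker published for 22/tcp with a Go template. Since the container was never created, every attempt exits 1. Against a healthy node the same lookup, taken verbatim from the log, would print a single host port number:
	
	    docker container inspect \
	      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	      multinode-948000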
	I0425 12:34:00.228556   21240 start.go:360] acquireMachinesLock for multinode-948000: {Name:mkc22316bab7a305bfcfe18e5a80258ef7beb819 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 12:34:00.228794   21240 start.go:364] duration metric: took 194.829µs to acquireMachinesLock for "multinode-948000"
	I0425 12:34:00.228831   21240 start.go:96] Skipping create...Using existing machine configuration
	I0425 12:34:00.228839   21240 fix.go:54] fixHost starting: 
	I0425 12:34:00.229313   21240 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:34:00.281779   21240 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:34:00.281829   21240 fix.go:112] recreateIfNeeded on multinode-948000: state= err=unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:34:00.281846   21240 fix.go:117] machineExists: false. err=machine does not exist
	I0425 12:34:00.324170   21240 out.go:177] * docker "multinode-948000" container is missing, will recreate.
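	fixHost decides whether the machine still exists with a single state probe; an inspect that fails with "No such container" is mapped to machineExists: false, which is what routes execution into the demolish-and-recreate path below. The probe is just:
	
	    docker container inspect multinode-948000 --format={{.State.Status}}
	
	For a real container this prints running, exited, and so on; here it exits 1.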
	I0425 12:34:00.345164   21240 delete.go:124] DEMOLISHING multinode-948000 ...
	I0425 12:34:00.345412   21240 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:34:00.395091   21240 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	W0425 12:34:00.395137   21240 stop.go:83] unable to get state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:34:00.395157   21240 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:34:00.395532   21240 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:34:00.444014   21240 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:34:00.444078   21240 delete.go:82] Unable to get host status for multinode-948000, assuming it has already been deleted: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:34:00.444167   21240 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-948000
	W0425 12:34:00.492482   21240 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-948000 returned with exit code 1
	I0425 12:34:00.492514   21240 kic.go:371] could not find the container multinode-948000 to remove it. will try anyways
	I0425 12:34:00.492584   21240 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:34:00.540969   21240 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	W0425 12:34:00.541016   21240 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:34:00.541094   21240 cli_runner.go:164] Run: docker exec --privileged -t multinode-948000 /bin/bash -c "sudo init 0"
	W0425 12:34:00.589124   21240 cli_runner.go:211] docker exec --privileged -t multinode-948000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0425 12:34:00.589153   21240 oci.go:650] error shutdown multinode-948000: docker exec --privileged -t multinode-948000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:34:01.589922   21240 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:34:01.640314   21240 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:34:01.640357   21240 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:34:01.640368   21240 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:34:01.640392   21240 retry.go:31] will retry after 443.944369ms: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:34:02.086697   21240 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:34:02.138842   21240 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:34:02.138884   21240 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:34:02.138894   21240 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:34:02.138917   21240 retry.go:31] will retry after 1.059225147s: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:34:03.200479   21240 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:34:03.253165   21240 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:34:03.253216   21240 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:34:03.253224   21240 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:34:03.253250   21240 retry.go:31] will retry after 1.035087515s: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:34:04.290766   21240 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:34:04.343642   21240 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:34:04.343689   21240 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:34:04.343698   21240 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:34:04.343722   21240 retry.go:31] will retry after 1.058758331s: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:34:05.404005   21240 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:34:05.457062   21240 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:34:05.457108   21240 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:34:05.457119   21240 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:34:05.457146   21240 retry.go:31] will retry after 1.883692153s: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:34:07.341819   21240 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:34:07.391655   21240 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:34:07.391699   21240 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:34:07.391710   21240 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:34:07.391745   21240 retry.go:31] will retry after 5.619402433s: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:34:13.012401   21240 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:34:13.064265   21240 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:34:13.064314   21240 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:34:13.064322   21240 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:34:13.064341   21240 retry.go:31] will retry after 5.98183459s: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:34:19.047510   21240 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:34:19.098468   21240 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:34:19.098511   21240 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:34:19.098520   21240 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:34:19.098553   21240 oci.go:88] couldn't shut down multinode-948000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	 
	I0425 12:34:19.098623   21240 cli_runner.go:164] Run: docker rm -f -v multinode-948000
	I0425 12:34:19.147286   21240 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-948000
	W0425 12:34:19.194329   21240 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-948000 returned with exit code 1
	I0425 12:34:19.194435   21240 cli_runner.go:164] Run: docker network inspect multinode-948000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0425 12:34:19.242466   21240 cli_runner.go:164] Run: docker network rm multinode-948000
	I0425 12:34:19.340507   21240 fix.go:124] Sleeping 1 second for extra luck!
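	The DEMOLISHING sequence above degrades deliberately: the graceful shutdown (docker exec ... "sudo init 0") and every follow-up state check fail because there is no container, each failure is logged as "probably ok" or "might be okay", and minikube falls through to the forced cleanup. By hand the equivalent would be the two commands from the log:
	
	    docker rm -f -v multinode-948000
	    docker network rm multinode-948000
	
	At this point only the leftover network actually existed to be removed.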
	I0425 12:34:20.342719   21240 start.go:125] createHost starting for "" (driver="docker")
	I0425 12:34:20.364633   21240 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0425 12:34:20.364835   21240 start.go:159] libmachine.API.Create for "multinode-948000" (driver="docker")
	I0425 12:34:20.364872   21240 client.go:168] LocalClient.Create starting
	I0425 12:34:20.365077   21240 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/ca.pem
	I0425 12:34:20.365172   21240 main.go:141] libmachine: Decoding PEM data...
	I0425 12:34:20.365202   21240 main.go:141] libmachine: Parsing certificate...
	I0425 12:34:20.365281   21240 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/cert.pem
	I0425 12:34:20.365357   21240 main.go:141] libmachine: Decoding PEM data...
	I0425 12:34:20.365373   21240 main.go:141] libmachine: Parsing certificate...
	I0425 12:34:20.366047   21240 cli_runner.go:164] Run: docker network inspect multinode-948000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0425 12:34:20.415470   21240 cli_runner.go:211] docker network inspect multinode-948000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0425 12:34:20.415552   21240 network_create.go:281] running [docker network inspect multinode-948000] to gather additional debugging logs...
	I0425 12:34:20.415571   21240 cli_runner.go:164] Run: docker network inspect multinode-948000
	W0425 12:34:20.462756   21240 cli_runner.go:211] docker network inspect multinode-948000 returned with exit code 1
	I0425 12:34:20.462783   21240 network_create.go:284] error running [docker network inspect multinode-948000]: docker network inspect multinode-948000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-948000 not found
	I0425 12:34:20.462796   21240 network_create.go:286] output of [docker network inspect multinode-948000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-948000 not found
	
	** /stderr **
	I0425 12:34:20.462943   21240 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0425 12:34:20.512847   21240 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 12:34:20.514559   21240 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 12:34:20.516131   21240 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 12:34:20.516467   21240 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0024b7d10}
	I0425 12:34:20.516480   21240 network_create.go:124] attempt to create docker network multinode-948000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0425 12:34:20.516555   21240 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-948000 multinode-948000
	W0425 12:34:20.564057   21240 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-948000 multinode-948000 returned with exit code 1
	W0425 12:34:20.564100   21240 network_create.go:149] failed to create docker network multinode-948000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-948000 multinode-948000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0425 12:34:20.564117   21240 network_create.go:116] failed to create docker network multinode-948000 192.168.76.0/24, will retry: subnet is taken
	I0425 12:34:20.565457   21240 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 12:34:20.565821   21240 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00215f4a0}
	I0425 12:34:20.565833   21240 network_create.go:124] attempt to create docker network multinode-948000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0425 12:34:20.565901   21240 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-948000 multinode-948000
	I0425 12:34:20.649007   21240 network_create.go:108] docker network multinode-948000 192.168.85.0/24 created
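	The subnet scan above shows how minikube picks a network: it walks its private candidate ranges (192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24, ...) and skips the ones it has already reserved, but it cannot see pools held by other Docker networks until the create call fails. That is why 192.168.76.0/24 was attempted, rejected by the daemon with "Pool overlaps with other one on this address space", marked as taken, and replaced by 192.168.85.0/24, which succeeded. To see which existing network holds a conflicting pool, a one-liner along these lines works (illustrative only, not part of the test):
	
	    docker network ls -q | xargs docker network inspect \
	      -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'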
	I0425 12:34:20.649042   21240 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-948000" container
	I0425 12:34:20.649153   21240 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0425 12:34:20.697713   21240 cli_runner.go:164] Run: docker volume create multinode-948000 --label name.minikube.sigs.k8s.io=multinode-948000 --label created_by.minikube.sigs.k8s.io=true
	I0425 12:34:20.745278   21240 oci.go:103] Successfully created a docker volume multinode-948000
	I0425 12:34:20.745405   21240 cli_runner.go:164] Run: docker run --rm --name multinode-948000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-948000 --entrypoint /usr/bin/test -v multinode-948000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0425 12:34:20.982245   21240 oci.go:107] Successfully prepared a docker volume multinode-948000
	I0425 12:34:20.982276   21240 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 12:34:20.982290   21240 kic.go:194] Starting extracting preloaded images to volume ...
	I0425 12:34:20.982381   21240 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-948000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
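	Note the six-minute silence that follows this line: createHost started at 12:34:20 with a 360-second budget ("create host timed out in 360.000000 seconds" below), and the next log entry is the 12:40:20 disk check, so the entire budget appears to have elapsed while the kicbase sidecar was still extracting the preloaded images into the volume; the node container itself was never started. The extraction is an ordinary docker run (copied from the log, with the host path and image digest abbreviated here):
	
	    docker run --rm --entrypoint /usr/bin/tar \
	      -v .../preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro \
	      -v multinode-948000:/extractDir \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:... \
	      -I lz4 -xf /preloaded.tar -C /extractDir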
	I0425 12:40:20.367561   21240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:40:20.367695   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:40:20.418407   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:40:20.418522   21240 retry.go:31] will retry after 205.670083ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:20.626557   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:40:20.681107   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:40:20.681217   21240 retry.go:31] will retry after 392.464997ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:21.076067   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:40:21.125945   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:40:21.126056   21240 retry.go:31] will retry after 304.795316ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:21.433251   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:40:21.486639   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:40:21.486737   21240 retry.go:31] will retry after 431.169702ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:21.918406   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:40:21.970065   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	W0425 12:40:21.970187   21240 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	
	W0425 12:40:21.970208   21240 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:21.970267   21240 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0425 12:40:21.970332   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:40:22.017836   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:40:22.017934   21240 retry.go:31] will retry after 299.401461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:22.319668   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:40:22.372194   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:40:22.372297   21240 retry.go:31] will retry after 447.089397ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:22.821547   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:40:22.871840   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:40:22.871935   21240 retry.go:31] will retry after 495.497961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:23.369077   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:40:23.420067   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	W0425 12:40:23.420175   21240 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	
	W0425 12:40:23.420190   21240 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:23.420204   21240 start.go:128] duration metric: took 6m3.077169181s to createHost
	I0425 12:40:23.420288   21240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:40:23.420340   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:40:23.469141   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:40:23.469232   21240 retry.go:31] will retry after 181.008916ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:23.652573   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:40:23.704089   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:40:23.704181   21240 retry.go:31] will retry after 528.09895ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:24.234660   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:40:24.286814   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:40:24.286915   21240 retry.go:31] will retry after 517.555981ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:24.804867   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:40:24.855906   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	W0425 12:40:24.856019   21240 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	
	W0425 12:40:24.856036   21240 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:24.856088   21240 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0425 12:40:24.856154   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:40:24.903801   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:40:24.903892   21240 retry.go:31] will retry after 156.897037ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:25.063202   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:40:25.114068   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:40:25.114174   21240 retry.go:31] will retry after 192.274172ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:25.307624   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:40:25.361044   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:40:25.361142   21240 retry.go:31] will retry after 538.525721ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:25.901952   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:40:25.953441   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	I0425 12:40:25.953542   21240 retry.go:31] will retry after 508.099835ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:26.463995   21240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000
	W0425 12:40:26.515095   21240 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000 returned with exit code 1
	W0425 12:40:26.515193   21240 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	
	W0425 12:40:26.515213   21240 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-948000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:26.515221   21240 fix.go:56] duration metric: took 6m26.286090532s for fixHost
	I0425 12:40:26.515227   21240 start.go:83] releasing machines lock for "multinode-948000", held for 6m26.286127101s
	W0425 12:40:26.515308   21240 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-948000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-948000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0425 12:40:26.557689   21240 out.go:177] 
	W0425 12:40:26.578783   21240 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0425 12:40:26.578828   21240 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0425 12:40:26.578850   21240 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0425 12:40:26.599793   21240 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-948000" : exit status 52
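Exit status 52 here is the DRV_CREATE_TIMEOUT shown in the stderr above, and minikube's own output already names the recovery: delete the stale profile and start over. Using the test's binary that would be:

    out/minikube-darwin-amd64 delete -p multinode-948000

The post-mortem below confirms why deletion is needed: the profile's network survived the failed recreate even though the container never came up.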
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-948000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-948000
helpers_test.go:235: (dbg) docker inspect multinode-948000:

-- stdout --
	[
	    {
	        "Name": "multinode-948000",
	        "Id": "cbc58f8a269f750e4ed57156958e659db823d6bdea89c248207f66d514014aa6",
	        "Created": "2024-04-25T19:34:20.610449756Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-948000"
	        }
	    }
	]

-- /stdout --
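This inspect output looks contradictory at first: every earlier failure said "No such container", yet docker inspect returns a JSON object. The resolution is that a bare docker inspect matches any object type, and what it found is the leftover bridge network named multinode-948000 (note the Scope and Driver fields and the empty Containers map), not a container. Scoping the command makes that explicit:

    docker container inspect multinode-948000    # exit 1: No such container
    docker network inspect multinode-948000      # prints the network shown above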
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000: exit status 7 (114.067014ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0425 12:40:26.907335   21692 status.go:249] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-948000" host is not running, skipping log retrieval (state="Nonexistent")
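Exit status 7 from the status probes is the expected shape of this failure rather than a new one: status maps the failed state inspect to Nonexistent for host, kubelet, apiserver, and kubeconfig instead of aborting, which is why the helper tags it "may be ok" and skips log retrieval. The probe it ran:

    out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000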
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (791.04s)

TestMultiNode/serial/DeleteNode (0.48s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-948000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-948000 node delete m03: exit status 80 (197.899182ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-948000 host status: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	

** /stderr **
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-amd64 -p multinode-948000 node delete m03": exit status 80
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-948000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-948000 status --alsologtostderr: exit status 7 (115.510653ms)

-- stdout --
	multinode-948000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0425 12:40:27.171401   21700 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:40:27.171593   21700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:40:27.171599   21700 out.go:304] Setting ErrFile to fd 2...
	I0425 12:40:27.171603   21700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:40:27.171789   21700 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 12:40:27.171983   21700 out.go:298] Setting JSON to false
	I0425 12:40:27.172007   21700 mustload.go:65] Loading cluster: multinode-948000
	I0425 12:40:27.172049   21700 notify.go:220] Checking for updates...
	I0425 12:40:27.172317   21700 config.go:182] Loaded profile config "multinode-948000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:40:27.172331   21700 status.go:255] checking status of multinode-948000 ...
	I0425 12:40:27.172701   21700 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:40:27.220879   21700 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:40:27.220939   21700 status.go:330] multinode-948000 host status = "" (err=state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	)
	I0425 12:40:27.220965   21700 status.go:257] multinode-948000 status: &{Name:multinode-948000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0425 12:40:27.220982   21700 status.go:260] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	E0425 12:40:27.220992   21700 status.go:263] The "multinode-948000" host does not exist!

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-948000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-948000
helpers_test.go:235: (dbg) docker inspect multinode-948000:

-- stdout --
	[
	    {
	        "Name": "multinode-948000",
	        "Id": "cbc58f8a269f750e4ed57156958e659db823d6bdea89c248207f66d514014aa6",
	        "Created": "2024-04-25T19:34:20.610449756Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-948000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000: exit status 7 (114.188774ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 12:40:27.387493   21706 status.go:249] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-948000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.48s)
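The exit status 7 above is how minikube's status probe reports a vanished machine: each check shells out to `docker container inspect <name> --format={{.State.Status}}`, and a "No such container" answer from the daemon is mapped to the Nonexistent state seen in the stdout block. A minimal Go sketch of that mapping, using a hypothetical containerState helper rather than minikube's actual status.go code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState is a hypothetical helper mirroring the probe seen in this
	// log: run `docker container inspect <name> --format={{.State.Status}}` and
	// map a "No such container" error to the Nonexistent state.
	func containerState(name string) string {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "No such container") {
				return "Nonexistent"
			}
			return "Unknown"
		}
		return strings.TrimSpace(string(out)) // e.g. "running" or "exited"
	}

	func main() {
		fmt.Println(containerState("multinode-948000"))
	}

Run against the deleted profile above, this prints Nonexistent, matching the host/kubelet/apiserver/kubeconfig lines in the status output.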

                                                
                                    
TestMultiNode/serial/StopMultiNode (17.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-948000 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-948000 stop: exit status 82 (17.405320594s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-948000"  ...
	* Stopping node "multinode-948000"  ...
	* Stopping node "multinode-948000"  ...
	* Stopping node "multinode-948000"  ...
	* Stopping node "multinode-948000"  ...
	* Stopping node "multinode-948000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-948000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-darwin-amd64 -p multinode-948000 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-948000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-948000 status: exit status 7 (113.039985ms)

                                                
                                                
-- stdout --
	multinode-948000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 12:40:44.906196   21728 status.go:260] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	E0425 12:40:44.906207   21728 status.go:263] The "multinode-948000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-948000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-948000 status --alsologtostderr: exit status 7 (113.735619ms)

                                                
                                                
-- stdout --
	multinode-948000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 12:40:44.970459   21732 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:40:44.970665   21732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:40:44.970671   21732 out.go:304] Setting ErrFile to fd 2...
	I0425 12:40:44.970674   21732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:40:44.970854   21732 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 12:40:44.971036   21732 out.go:298] Setting JSON to false
	I0425 12:40:44.971063   21732 mustload.go:65] Loading cluster: multinode-948000
	I0425 12:40:44.971101   21732 notify.go:220] Checking for updates...
	I0425 12:40:44.971380   21732 config.go:182] Loaded profile config "multinode-948000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:40:44.971393   21732 status.go:255] checking status of multinode-948000 ...
	I0425 12:40:44.971783   21732 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:40:45.019956   21732 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:40:45.020011   21732 status.go:330] multinode-948000 host status = "" (err=state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	)
	I0425 12:40:45.020031   21732 status.go:257] multinode-948000 status: &{Name:multinode-948000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0425 12:40:45.020051   21732 status.go:260] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	E0425 12:40:45.020059   21732 status.go:263] The "multinode-948000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-948000 status --alsologtostderr": multinode-948000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-948000 status --alsologtostderr": multinode-948000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-948000
helpers_test.go:235: (dbg) docker inspect multinode-948000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-948000",
	        "Id": "cbc58f8a269f750e4ed57156958e659db823d6bdea89c248207f66d514014aa6",
	        "Created": "2024-04-25T19:34:20.610449756Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-948000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000: exit status 7 (126.8274ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 12:40:45.198332   21738 status.go:249] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-948000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (17.81s)
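Exit status 82 corresponds to GUEST_STOP_TIMEOUT: the six identical "Stopping node" lines show the stop path retrying the same failing container lookup before giving up. A rough sketch of that kind of bounded retry loop with growing delays, with stopNode as a hypothetical stand-in (illustrative only, not the minikube source):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// stopNode is a hypothetical stand-in for the per-node stop attempt, which
	// in this run always fails because the container no longer exists.
	func stopNode(name string) error {
		return errors.New("No such container: " + name)
	}

	func main() {
		const attempts = 6 // matches the six "Stopping node" lines above
		delay := 500 * time.Millisecond
		for i := 0; i < attempts; i++ {
			fmt.Printf("* Stopping node %q ...\n", "multinode-948000")
			if err := stopNode("multinode-948000"); err == nil {
				return
			}
			time.Sleep(delay)
			delay *= 2 // back off a little more on each failed attempt
		}
		fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT") // surfaced as exit status 82
	}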

                                                
                                    
TestMultiNode/serial/RestartMultiNode (81.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-948000 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-948000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (1m20.88702309s)

                                                
                                                
-- stdout --
	* [multinode-948000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18757
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-948000" primary control-plane node in "multinode-948000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* docker "multinode-948000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 12:40:45.262849   21742 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:40:45.263109   21742 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:40:45.263114   21742 out.go:304] Setting ErrFile to fd 2...
	I0425 12:40:45.263118   21742 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:40:45.263266   21742 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 12:40:45.264768   21742 out.go:298] Setting JSON to false
	I0425 12:40:45.286817   21742 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":11416,"bootTime":1714062629,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0425 12:40:45.286908   21742 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0425 12:40:45.309247   21742 out.go:177] * [multinode-948000] minikube v1.33.0 on Darwin 14.4.1
	I0425 12:40:45.350736   21742 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 12:40:45.350810   21742 notify.go:220] Checking for updates...
	I0425 12:40:45.371810   21742 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
	I0425 12:40:45.392909   21742 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0425 12:40:45.434554   21742 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 12:40:45.476706   21742 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube
	I0425 12:40:45.520645   21742 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 12:40:45.542060   21742 config.go:182] Loaded profile config "multinode-948000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:40:45.542668   21742 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 12:40:45.597367   21742 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0425 12:40:45.597533   21742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0425 12:40:45.710291   21742 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:89 OomKillDisable:false NGoroutines:145 SystemTime:2024-04-25 19:40:45.698120649 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211088384 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0425 12:40:45.732239   21742 out.go:177] * Using the docker driver based on existing profile
	I0425 12:40:45.774075   21742 start.go:297] selected driver: docker
	I0425 12:40:45.774104   21742 start.go:901] validating driver "docker" against &{Name:multinode-948000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-948000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 12:40:45.774220   21742 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 12:40:45.774382   21742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0425 12:40:45.887261   21742 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:89 OomKillDisable:false NGoroutines:145 SystemTime:2024-04-25 19:40:45.876646834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211088384 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0425 12:40:45.890297   21742 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 12:40:45.890367   21742 cni.go:84] Creating CNI manager for ""
	I0425 12:40:45.890377   21742 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0425 12:40:45.890443   21742 start.go:340] cluster config:
	{Name:multinode-948000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-948000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 12:40:45.932471   21742 out.go:177] * Starting "multinode-948000" primary control-plane node in "multinode-948000" cluster
	I0425 12:40:45.953337   21742 cache.go:121] Beginning downloading kic base image for docker with docker
	I0425 12:40:45.974568   21742 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0425 12:40:46.016486   21742 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 12:40:46.016526   21742 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0425 12:40:46.016562   21742 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0425 12:40:46.016579   21742 cache.go:56] Caching tarball of preloaded images
	I0425 12:40:46.016792   21742 preload.go:173] Found /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0425 12:40:46.016815   21742 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0425 12:40:46.017819   21742 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/multinode-948000/config.json ...
	I0425 12:40:46.069339   21742 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0425 12:40:46.069353   21742 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0425 12:40:46.069384   21742 cache.go:194] Successfully downloaded all kic artifacts
	I0425 12:40:46.069417   21742 start.go:360] acquireMachinesLock for multinode-948000: {Name:mkc22316bab7a305bfcfe18e5a80258ef7beb819 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 12:40:46.069518   21742 start.go:364] duration metric: took 82.694µs to acquireMachinesLock for "multinode-948000"
	I0425 12:40:46.069542   21742 start.go:96] Skipping create...Using existing machine configuration
	I0425 12:40:46.069552   21742 fix.go:54] fixHost starting: 
	I0425 12:40:46.069814   21742 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:40:46.117972   21742 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:40:46.118022   21742 fix.go:112] recreateIfNeeded on multinode-948000: state= err=unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:46.118042   21742 fix.go:117] machineExists: false. err=machine does not exist
	I0425 12:40:46.139633   21742 out.go:177] * docker "multinode-948000" container is missing, will recreate.
	I0425 12:40:46.181420   21742 delete.go:124] DEMOLISHING multinode-948000 ...
	I0425 12:40:46.181589   21742 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:40:46.230839   21742 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	W0425 12:40:46.230894   21742 stop.go:83] unable to get state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:46.230913   21742 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:46.231273   21742 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:40:46.278713   21742 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:40:46.278769   21742 delete.go:82] Unable to get host status for multinode-948000, assuming it has already been deleted: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:46.278858   21742 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-948000
	W0425 12:40:46.326050   21742 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-948000 returned with exit code 1
	I0425 12:40:46.326089   21742 kic.go:371] could not find the container multinode-948000 to remove it. will try anyways
	I0425 12:40:46.326185   21742 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:40:46.373987   21742 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	W0425 12:40:46.374034   21742 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:46.374122   21742 cli_runner.go:164] Run: docker exec --privileged -t multinode-948000 /bin/bash -c "sudo init 0"
	W0425 12:40:46.422198   21742 cli_runner.go:211] docker exec --privileged -t multinode-948000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0425 12:40:46.422230   21742 oci.go:650] error shutdown multinode-948000: docker exec --privileged -t multinode-948000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:47.423056   21742 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:40:47.474625   21742 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:40:47.474680   21742 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:47.474689   21742 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:40:47.474727   21742 retry.go:31] will retry after 420.887911ms: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:47.896251   21742 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:40:47.946255   21742 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:40:47.946307   21742 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:47.946319   21742 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:40:47.946345   21742 retry.go:31] will retry after 1.0420892s: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:48.989914   21742 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:40:49.042163   21742 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:40:49.042209   21742 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:49.042216   21742 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:40:49.042241   21742 retry.go:31] will retry after 1.60972623s: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:50.654402   21742 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:40:50.703966   21742 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:40:50.704012   21742 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:50.704020   21742 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:40:50.704043   21742 retry.go:31] will retry after 1.982533762s: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:52.689054   21742 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:40:52.739908   21742 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:40:52.739950   21742 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:52.739957   21742 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:40:52.739981   21742 retry.go:31] will retry after 2.487478543s: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:55.229829   21742 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:40:55.293790   21742 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:40:55.293835   21742 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:55.293843   21742 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:40:55.293862   21742 retry.go:31] will retry after 2.95315256s: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:58.248898   21742 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:40:58.300184   21742 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:40:58.300226   21742 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:40:58.300234   21742 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:40:58.300259   21742 retry.go:31] will retry after 4.508182915s: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:41:02.809799   21742 cli_runner.go:164] Run: docker container inspect multinode-948000 --format={{.State.Status}}
	W0425 12:41:02.862816   21742 cli_runner.go:211] docker container inspect multinode-948000 --format={{.State.Status}} returned with exit code 1
	I0425 12:41:02.862863   21742 oci.go:662] temporary error verifying shutdown: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	I0425 12:41:02.862872   21742 oci.go:664] temporary error: container multinode-948000 status is  but expect it to be exited
	I0425 12:41:02.862901   21742 oci.go:88] couldn't shut down multinode-948000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000
	 
	I0425 12:41:02.862976   21742 cli_runner.go:164] Run: docker rm -f -v multinode-948000
	I0425 12:41:02.912309   21742 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-948000
	W0425 12:41:02.960606   21742 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-948000 returned with exit code 1
	I0425 12:41:02.960711   21742 cli_runner.go:164] Run: docker network inspect multinode-948000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0425 12:41:03.009251   21742 cli_runner.go:164] Run: docker network rm multinode-948000
	I0425 12:41:03.114444   21742 fix.go:124] Sleeping 1 second for extra luck!
	I0425 12:41:04.115944   21742 start.go:125] createHost starting for "" (driver="docker")
	I0425 12:41:04.137780   21742 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0425 12:41:04.137907   21742 start.go:159] libmachine.API.Create for "multinode-948000" (driver="docker")
	I0425 12:41:04.137941   21742 client.go:168] LocalClient.Create starting
	I0425 12:41:04.138089   21742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/ca.pem
	I0425 12:41:04.138167   21742 main.go:141] libmachine: Decoding PEM data...
	I0425 12:41:04.138193   21742 main.go:141] libmachine: Parsing certificate...
	I0425 12:41:04.138259   21742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-9222/.minikube/certs/cert.pem
	I0425 12:41:04.138309   21742 main.go:141] libmachine: Decoding PEM data...
	I0425 12:41:04.138319   21742 main.go:141] libmachine: Parsing certificate...
	I0425 12:41:04.138800   21742 cli_runner.go:164] Run: docker network inspect multinode-948000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0425 12:41:04.189632   21742 cli_runner.go:211] docker network inspect multinode-948000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0425 12:41:04.189729   21742 network_create.go:281] running [docker network inspect multinode-948000] to gather additional debugging logs...
	I0425 12:41:04.189752   21742 cli_runner.go:164] Run: docker network inspect multinode-948000
	W0425 12:41:04.237618   21742 cli_runner.go:211] docker network inspect multinode-948000 returned with exit code 1
	I0425 12:41:04.237651   21742 network_create.go:284] error running [docker network inspect multinode-948000]: docker network inspect multinode-948000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-948000 not found
	I0425 12:41:04.237664   21742 network_create.go:286] output of [docker network inspect multinode-948000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-948000 not found
	
	** /stderr **
	I0425 12:41:04.237817   21742 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0425 12:41:04.287661   21742 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 12:41:04.289077   21742 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 12:41:04.289399   21742 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00248b270}
	I0425 12:41:04.289415   21742 network_create.go:124] attempt to create docker network multinode-948000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0425 12:41:04.289482   21742 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-948000 multinode-948000
	I0425 12:41:04.373411   21742 network_create.go:108] docker network multinode-948000 192.168.67.0/24 created
	I0425 12:41:04.373448   21742 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-948000" container
	I0425 12:41:04.373559   21742 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0425 12:41:04.422680   21742 cli_runner.go:164] Run: docker volume create multinode-948000 --label name.minikube.sigs.k8s.io=multinode-948000 --label created_by.minikube.sigs.k8s.io=true
	I0425 12:41:04.470605   21742 oci.go:103] Successfully created a docker volume multinode-948000
	I0425 12:41:04.470716   21742 cli_runner.go:164] Run: docker run --rm --name multinode-948000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-948000 --entrypoint /usr/bin/test -v multinode-948000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0425 12:41:04.716423   21742 oci.go:107] Successfully prepared a docker volume multinode-948000
	I0425 12:41:04.716462   21742 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 12:41:04.716475   21742 kic.go:194] Starting extracting preloaded images to volume ...
	I0425 12:41:04.716594   21742 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-948000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-948000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-948000
helpers_test.go:235: (dbg) docker inspect multinode-948000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-948000",
	        "Id": "b43b1178991335bdb6a20b8d26b4b8435d32fb20774af825af6ea650fe88b74b",
	        "Created": "2024-04-25T19:41:04.33431154Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-948000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-948000 -n multinode-948000: exit status 7 (115.454089ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 12:42:06.261001   21848 status.go:249] status error: host: state: unknown state "multinode-948000": docker container inspect multinode-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-948000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-948000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (81.06s)
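The recreate path above removes the stale multinode-948000 network, then picks a fresh private subnet: 192.168.49.0/24 and 192.168.58.0/24 are skipped as reserved, and 192.168.67.0/24 is chosen. A simplified sketch of that selection; the reserved set is a hypothetical stand-in for what the docker daemon actually reports, and the step of 9 between candidates is inferred from the subnets in this report:

	package main

	import "fmt"

	// freeSubnet walks candidate /24 ranges in the same order the log shows
	// (192.168.49, .58, .67, ...) and returns the first subnet not already
	// reserved.
	func freeSubnet(reserved map[string]bool) string {
		for third := 49; third < 256; third += 9 {
			s := fmt.Sprintf("192.168.%d.0/24", third)
			if !reserved[s] {
				return s
			}
		}
		return ""
	}

	func main() {
		reserved := map[string]bool{
			"192.168.49.0/24": true, // skipped as reserved in the log
			"192.168.58.0/24": true,
		}
		fmt.Println(freeSubnet(reserved)) // 192.168.67.0/24, as chosen above
	}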

                                                
                                    
TestScheduledStopUnix (300.89s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-948000 --memory=2048 --driver=docker 
E0425 12:44:02.957320    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 12:45:09.011703    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 12:45:26.009586    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-948000 --memory=2048 --driver=docker : signal: killed (5m0.005545666s)

                                                
                                                
-- stdout --
	* [scheduled-stop-948000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18757
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-948000" primary control-plane node in "scheduled-stop-948000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [scheduled-stop-948000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18757
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-948000" primary control-plane node in "scheduled-stop-948000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-04-25 12:48:55.104921 -0700 PDT m=+4684.139939703
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-948000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-948000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "scheduled-stop-948000",
	        "Id": "208ca4dbe80b292a1b4b349720ce0e5dc29c9b884f1cce51a285536c5a0c51bb",
	        "Created": "2024-04-25T19:43:56.206923523Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-948000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-948000 -n scheduled-stop-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-948000 -n scheduled-stop-948000: exit status 7 (113.994362ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 12:48:55.270873   22355 status.go:249] status error: host: state: unknown state "scheduled-stop-948000": docker container inspect scheduled-stop-948000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-948000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-948000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-948000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-948000
--- FAIL: TestScheduledStopUnix (300.89s)

                                                
                                    
TestSkaffold (300.9s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe277801991 version
skaffold_test.go:59: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe277801991 version: (1.527129968s)
skaffold_test.go:63: skaffold version: v2.11.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-132000 --memory=2600 --driver=docker 
E0425 12:49:02.957562    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 12:50:09.011740    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 12:51:32.070011    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-132000 --memory=2600 --driver=docker : signal: killed (4m56.871540059s)

                                                
                                                
-- stdout --
	* [skaffold-132000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18757
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-132000" primary control-plane node in "skaffold-132000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [skaffold-132000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18757
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-132000" primary control-plane node in "skaffold-132000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
panic.go:626: *** TestSkaffold FAILED at 2024-04-25 12:53:56.057489 -0700 PDT m=+4985.031409541
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-132000
helpers_test.go:235: (dbg) docker inspect skaffold-132000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "skaffold-132000",
	        "Id": "5fa81ebba1fb6ca268bdc6426cd8d01c8123b808d68f840a04b189dd9d6811fc",
	        "Created": "2024-04-25T19:49:00.29088896Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-132000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-132000 -n skaffold-132000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-132000 -n skaffold-132000: exit status 7 (116.980499ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 12:53:56.228304   22507 status.go:249] status error: host: state: unknown state "skaffold-132000": docker container inspect skaffold-132000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-132000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-132000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-132000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-132000
--- FAIL: TestSkaffold (300.90s)

                                                
                                    
TestInsufficientStorage (300.72s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-266000 --memory=2048 --output=json --wait=true --driver=docker 
E0425 12:54:03.018785    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 12:55:09.074225    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-266000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.004801637s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"589899d6-ac7a-46bb-ac2e-10001b2c686e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-266000] minikube v1.33.0 on Darwin 14.4.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1618f976-c030-4971-b622-9626d2a51a83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18757"}}
	{"specversion":"1.0","id":"1e0b4769-13f9-4369-bb87-d60b995d7655","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig"}}
	{"specversion":"1.0","id":"de4487df-90df-4e83-a9f0-244c9c2647f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"408959f0-e75f-43ab-aeca-409c34f90f25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ed726fd4-8aed-4dc3-ae6b-a517bd382ce1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube"}}
	{"specversion":"1.0","id":"7323983b-c264-470f-9fd3-8dcfad9f3876","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"35b2ae03-365d-49aa-8dc0-8467e7c81116","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"6d4f50d9-1c20-45a1-9453-ce3430934ea7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"912c312c-586a-4856-9e65-f4bf50e954a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"33678cfa-89d2-4659-91fb-f49f3c20557a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"a27ea774-e08a-43bf-b142-fc62cb9ef829","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-266000\" primary control-plane node in \"insufficient-storage-266000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8140835c-af51-401b-9c96-edec62eb6796","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.43-1713736339-18706 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f91608b1-b34f-43a3-b923-099cea352b32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-266000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-266000 --output=json --layout=cluster: context deadline exceeded (580ns)
status_test.go:87: unmarshalling: unexpected end of JSON input
helpers_test.go:175: Cleaning up "insufficient-storage-266000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-266000
--- FAIL: TestInsufficientStorage (300.72s)
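The stdout above is minikube's --output=json stream: newline-delimited CloudEvents, one JSON object per line, with "io.k8s.sigs.minikube.step" events carrying currentstep/totalsteps progress. (The later "unexpected end of JSON input" is consistent with status producing no output at all once the 580ns context deadline had already expired.) A minimal sketch for extracting just the step messages from such a run, assuming jq is available; the flags are copied from the failing command above:

	out/minikube-darwin-amd64 start -p insufficient-storage-266000 --memory=2048 --output=json --wait=true --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + "/" + .data.totalsteps + ": " + .data.message'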

                                                
                                    

Test pass (169/208)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 24.12
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.32
9 TestDownloadOnly/v1.20.0/DeleteAll 0.62
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.37
12 TestDownloadOnly/v1.30.0/json-events 7.74
13 TestDownloadOnly/v1.30.0/preload-exists 0
16 TestDownloadOnly/v1.30.0/kubectl 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.3
18 TestDownloadOnly/v1.30.0/DeleteAll 0.65
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.37
20 TestDownloadOnlyKic 1.85
21 TestBinaryMirror 1.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.15
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.17
27 TestAddons/Setup 153.09
31 TestAddons/parallel/InspektorGadget 10.76
32 TestAddons/parallel/MetricsServer 5.72
33 TestAddons/parallel/HelmTiller 10.35
35 TestAddons/parallel/CSI 88.98
36 TestAddons/parallel/Headlamp 12.21
37 TestAddons/parallel/CloudSpanner 5.68
38 TestAddons/parallel/LocalPath 54.01
39 TestAddons/parallel/NvidiaDevicePlugin 5.61
40 TestAddons/parallel/Yakd 5
43 TestAddons/serial/GCPAuth/Namespaces 0.11
44 TestAddons/StoppedEnableDisable 11.68
52 TestHyperKitDriverInstallOrUpdate 8.11
55 TestErrorSpam/setup 20.04
56 TestErrorSpam/start 2.09
57 TestErrorSpam/status 1.16
58 TestErrorSpam/pause 1.6
59 TestErrorSpam/unpause 1.66
60 TestErrorSpam/stop 11.38
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 74.11
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 35.94
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.06
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.51
72 TestFunctional/serial/CacheCmd/cache/add_local 1.58
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
74 TestFunctional/serial/CacheCmd/cache/list 0.09
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.4
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.88
77 TestFunctional/serial/CacheCmd/cache/delete 0.18
78 TestFunctional/serial/MinikubeKubectlCmd 0.98
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.45
80 TestFunctional/serial/ExtraConfig 40.07
81 TestFunctional/serial/ComponentHealth 0.06
82 TestFunctional/serial/LogsCmd 3.01
83 TestFunctional/serial/LogsFileCmd 2.91
84 TestFunctional/serial/InvalidService 4.02
86 TestFunctional/parallel/ConfigCmd 0.53
87 TestFunctional/parallel/DashboardCmd 12.17
88 TestFunctional/parallel/DryRun 1.83
89 TestFunctional/parallel/InternationalLanguage 0.71
90 TestFunctional/parallel/StatusCmd 1.18
95 TestFunctional/parallel/AddonsCmd 0.27
96 TestFunctional/parallel/PersistentVolumeClaim 27.67
98 TestFunctional/parallel/SSHCmd 0.73
99 TestFunctional/parallel/CpCmd 2.66
100 TestFunctional/parallel/MySQL 31.75
101 TestFunctional/parallel/FileSync 0.44
102 TestFunctional/parallel/CertSync 2.62
106 TestFunctional/parallel/NodeLabels 0.07
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.48
110 TestFunctional/parallel/License 0.68
111 TestFunctional/parallel/Version/short 0.11
112 TestFunctional/parallel/Version/components 0.71
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.37
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.34
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.39
117 TestFunctional/parallel/ImageCommands/ImageBuild 2.59
118 TestFunctional/parallel/ImageCommands/Setup 2.56
119 TestFunctional/parallel/DockerEnv/bash 2.13
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.24
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.28
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.33
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.27
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.37
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.4
126 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.72
127 TestFunctional/parallel/ImageCommands/ImageRemove 0.74
128 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.38
129 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.59
130 TestFunctional/parallel/ServiceCmd/DeployApp 17.13
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
133 TestFunctional/parallel/ServiceCmd/List 0.78
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.15
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.68
138 TestFunctional/parallel/ServiceCmd/HTTPS 15
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
145 TestFunctional/parallel/ServiceCmd/Format 15.01
146 TestFunctional/parallel/ServiceCmd/URL 15
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.74
148 TestFunctional/parallel/MountCmd/any-port 8.98
149 TestFunctional/parallel/ProfileCmd/profile_list 0.6
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.56
151 TestFunctional/parallel/MountCmd/specific-port 2.38
152 TestFunctional/parallel/MountCmd/VerifyCleanup 2.83
153 TestFunctional/delete_addon-resizer_images 0.12
154 TestFunctional/delete_my-image_image 0.05
155 TestFunctional/delete_minikube_cached_images 0.05
159 TestMultiControlPlane/serial/StartCluster 97.46
160 TestMultiControlPlane/serial/DeployApp 5.33
161 TestMultiControlPlane/serial/PingHostFromPods 1.38
162 TestMultiControlPlane/serial/AddWorkerNode 17.81
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.11
165 TestMultiControlPlane/serial/CopyFile 23.68
166 TestMultiControlPlane/serial/StopSecondaryNode 11.77
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
168 TestMultiControlPlane/serial/RestartSecondaryNode 21.79
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.34
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 214.13
171 TestMultiControlPlane/serial/DeleteSecondaryNode 10.81
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.81
173 TestMultiControlPlane/serial/StopCluster 33.11
174 TestMultiControlPlane/serial/RestartCluster 82.25
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
176 TestMultiControlPlane/serial/AddSecondaryNode 33.63
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.07
180 TestImageBuild/serial/Setup 20.89
181 TestImageBuild/serial/NormalBuild 1.79
182 TestImageBuild/serial/BuildWithBuildArg 0.94
183 TestImageBuild/serial/BuildWithDockerIgnore 0.79
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.77
188 TestJSONOutput/start/Command 74.58
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.56
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.6
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 10.69
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.77
213 TestKicCustomNetwork/create_custom_network 22.45
214 TestKicCustomNetwork/use_default_bridge_network 22.67
215 TestKicExistingNetwork 22.49
216 TestKicCustomSubnet 22.22
217 TestKicStaticIP 22.75
218 TestMainNoArgs 0.09
219 TestMinikubeProfile 47.28
222 TestMountStart/serial/StartWithMountFirst 7.03
223 TestMountStart/serial/VerifyMountFirst 0.38
224 TestMountStart/serial/StartWithMountSecond 7.33
225 TestMountStart/serial/VerifyMountSecond 0.38
226 TestMountStart/serial/DeleteFirst 2.04
227 TestMountStart/serial/VerifyMountPostDelete 0.38
228 TestMountStart/serial/Stop 1.55
229 TestMountStart/serial/RestartStopped 8.24
249 TestPreload 107.95
270 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 14.27
271 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 11.52
TestDownloadOnly/v1.20.0/json-events (24.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-287000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-287000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker : (24.114915511s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (24.12s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-287000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-287000: exit status 85 (321.396584ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-287000 | jenkins | v1.33.0 | 25 Apr 24 11:30 PDT |          |
	|         | -p download-only-287000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 11:30:50
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 11:30:50.972278    9674 out.go:291] Setting OutFile to fd 1 ...
	I0425 11:30:50.972540    9674 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 11:30:50.972545    9674 out.go:304] Setting ErrFile to fd 2...
	I0425 11:30:50.972549    9674 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 11:30:50.972717    9674 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	W0425 11:30:50.972816    9674 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18757-9222/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18757-9222/.minikube/config/config.json: no such file or directory
	I0425 11:30:50.974614    9674 out.go:298] Setting JSON to true
	I0425 11:30:50.996913    9674 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":7221,"bootTime":1714062629,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0425 11:30:50.997002    9674 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0425 11:30:51.019227    9674 out.go:97] [download-only-287000] minikube v1.33.0 on Darwin 14.4.1
	I0425 11:30:51.040461    9674 out.go:169] MINIKUBE_LOCATION=18757
	I0425 11:30:51.019498    9674 notify.go:220] Checking for updates...
	W0425 11:30:51.019493    9674 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball: no such file or directory
	I0425 11:30:51.083566    9674 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
	I0425 11:30:51.104516    9674 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0425 11:30:51.125663    9674 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 11:30:51.146743    9674 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube
	W0425 11:30:51.188707    9674 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0425 11:30:51.189219    9674 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 11:30:51.244834    9674 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0425 11:30:51.244985    9674 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0425 11:30:51.353886    9674 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:98 SystemTime:2024-04-25 18:30:51.342195328 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211088384 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0
-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev S
chemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/do
cker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0425 11:30:51.375472    9674 out.go:97] Using the docker driver based on user configuration
	I0425 11:30:51.375524    9674 start.go:297] selected driver: docker
	I0425 11:30:51.375546    9674 start.go:901] validating driver "docker" against <nil>
	I0425 11:30:51.375752    9674 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0425 11:30:51.490313    9674 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:98 SystemTime:2024-04-25 18:30:51.479627133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211088384 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0
-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev S
chemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/do
cker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0425 11:30:51.490505    9674 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0425 11:30:51.493412    9674 start_flags.go:393] Using suggested 5875MB memory alloc based on sys=32768MB, container=5923MB
	I0425 11:30:51.493553    9674 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0425 11:30:51.514508    9674 out.go:169] Using Docker Desktop driver with root privileges
	I0425 11:30:51.535591    9674 cni.go:84] Creating CNI manager for ""
	I0425 11:30:51.535635    9674 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0425 11:30:51.535765    9674 start.go:340] cluster config:
	{Name:download-only-287000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:5875 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-287000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 11:30:51.557446    9674 out.go:97] Starting "download-only-287000" primary control-plane node in "download-only-287000" cluster
	I0425 11:30:51.557488    9674 cache.go:121] Beginning downloading kic base image for docker with docker
	I0425 11:30:51.578266    9674 out.go:97] Pulling base image v0.0.43-1713736339-18706 ...
	I0425 11:30:51.578398    9674 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0425 11:30:51.578468    9674 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0425 11:30:51.629577    9674 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e to local cache
	I0425 11:30:51.629823    9674 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory
	I0425 11:30:51.629959    9674 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e to local cache
	I0425 11:30:51.633390    9674 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0425 11:30:51.633411    9674 cache.go:56] Caching tarball of preloaded images
	I0425 11:30:51.633584    9674 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0425 11:30:51.655399    9674 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0425 11:30:51.655429    9674 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0425 11:30:51.749647    9674 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0425 11:30:59.063755    9674 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0425 11:30:59.063944    9674 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0425 11:30:59.608844    9674 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0425 11:30:59.609090    9674 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/download-only-287000/config.json ...
	I0425 11:30:59.609113    9674 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/download-only-287000/config.json: {Name:mk29ae4e3336ef536baf075a543617a95ac5a311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 11:30:59.609456    9674 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0425 11:30:59.609802    9674 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-287000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-287000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.32s)
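The "Last Start" log above shows the preload tarball being downloaded with an md5 checksum embedded in the URL and then verified on disk. A minimal way to re-check the cached tarball by hand on macOS, with the path and expected digest copied from the download line above (md5(1) is the BSD counterpart of md5sum):

	md5 /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	# expected digest: 9a82241e9b8b4ad2b5cca73108f2c7a3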

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.62s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.62s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-287000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.37s)

                                                
                                    
TestDownloadOnly/v1.30.0/json-events (7.74s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-234000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-234000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=docker : (7.737631791s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (7.74s)

                                                
                                    
TestDownloadOnly/v1.30.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-234000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-234000: exit status 85 (302.494262ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-287000 | jenkins | v1.33.0 | 25 Apr 24 11:30 PDT |                     |
	|         | -p download-only-287000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 25 Apr 24 11:31 PDT | 25 Apr 24 11:31 PDT |
	| delete  | -p download-only-287000        | download-only-287000 | jenkins | v1.33.0 | 25 Apr 24 11:31 PDT | 25 Apr 24 11:31 PDT |
	| start   | -o=json --download-only        | download-only-234000 | jenkins | v1.33.0 | 25 Apr 24 11:31 PDT |                     |
	|         | -p download-only-234000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 11:31:16
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 11:31:16.405656    9751 out.go:291] Setting OutFile to fd 1 ...
	I0425 11:31:16.405942    9751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 11:31:16.405948    9751 out.go:304] Setting ErrFile to fd 2...
	I0425 11:31:16.405951    9751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 11:31:16.406128    9751 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 11:31:16.407540    9751 out.go:298] Setting JSON to true
	I0425 11:31:16.429465    9751 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":7247,"bootTime":1714062629,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0425 11:31:16.429552    9751 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0425 11:31:16.451423    9751 out.go:97] [download-only-234000] minikube v1.33.0 on Darwin 14.4.1
	I0425 11:31:16.473431    9751 out.go:169] MINIKUBE_LOCATION=18757
	I0425 11:31:16.451630    9751 notify.go:220] Checking for updates...
	I0425 11:31:16.516044    9751 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
	I0425 11:31:16.537356    9751 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0425 11:31:16.558482    9751 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 11:31:16.579392    9751 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube
	W0425 11:31:16.621437    9751 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0425 11:31:16.621949    9751 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 11:31:16.676821    9751 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0425 11:31:16.676967    9751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0425 11:31:16.783439    9751 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:98 SystemTime:2024-04-25 18:31:16.772767259 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211088384 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0425 11:31:16.804542    9751 out.go:97] Using the docker driver based on user configuration
	I0425 11:31:16.804591    9751 start.go:297] selected driver: docker
	I0425 11:31:16.804613    9751 start.go:901] validating driver "docker" against <nil>
	I0425 11:31:16.804801    9751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0425 11:31:16.908491    9751 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:98 SystemTime:2024-04-25 18:31:16.89878874 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211088384 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0425 11:31:16.908673    9751 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0425 11:31:16.911580    9751 start_flags.go:393] Using suggested 5875MB memory alloc based on sys=32768MB, container=5923MB
	I0425 11:31:16.911718    9751 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0425 11:31:16.932858    9751 out.go:169] Using Docker Desktop driver with root privileges
	I0425 11:31:16.954854    9751 cni.go:84] Creating CNI manager for ""
	I0425 11:31:16.954898    9751 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0425 11:31:16.954926    9751 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0425 11:31:16.955081    9751 start.go:340] cluster config:
	{Name:download-only-234000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:5875 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 11:31:16.976492    9751 out.go:97] Starting "download-only-234000" primary control-plane node in "download-only-234000" cluster
	I0425 11:31:16.976536    9751 cache.go:121] Beginning downloading kic base image for docker with docker
	I0425 11:31:16.997637    9751 out.go:97] Pulling base image v0.0.43-1713736339-18706 ...
	I0425 11:31:16.997697    9751 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 11:31:16.997802    9751 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0425 11:31:17.047141    9751 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e to local cache
	I0425 11:31:17.047312    9751 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory
	I0425 11:31:17.047330    9751 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory, skipping pull
	I0425 11:31:17.047336    9751 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in cache, skipping pull
	I0425 11:31:17.047344    9751 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e as a tarball
	I0425 11:31:17.062407    9751 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0425 11:31:17.062453    9751 cache.go:56] Caching tarball of preloaded images
	I0425 11:31:17.062760    9751 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 11:31:17.084787    9751 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0425 11:31:17.084835    9751 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0425 11:31:17.173551    9751 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4?checksum=md5:00b6acf85a82438f3897c0a6fafdcee7 -> /Users/jenkins/minikube-integration/18757-9222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-234000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-234000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.30s)

TestDownloadOnly/v1.30.0/DeleteAll (0.65s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.65s)

TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.37s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-234000
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.37s)

TestDownloadOnlyKic (1.85s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-905000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-905000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-905000
--- PASS: TestDownloadOnlyKic (1.85s)

TestBinaryMirror (1.59s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-557000 --alsologtostderr --binary-mirror http://127.0.0.1:57673 --driver=docker 
helpers_test.go:175: Cleaning up "binary-mirror-557000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-557000
--- PASS: TestBinaryMirror (1.59s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.15s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-396000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-396000: exit status 85 (151.947779ms)

-- stdout --
	* Profile "addons-396000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-396000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.15s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.17s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-396000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-396000: exit status 85 (172.644526ms)

-- stdout --
	* Profile "addons-396000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-396000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.17s)

TestAddons/Setup (153.09s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-396000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-396000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m33.087427927s)
--- PASS: TestAddons/Setup (153.09s)

TestAddons/parallel/InspektorGadget (10.76s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-shpcg" [c1738486-a27a-44f3-b246-be816b8933c8] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.002998741s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-396000
addons_test.go:841: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-396000: (5.761150173s)
--- PASS: TestAddons/parallel/InspektorGadget (10.76s)

TestAddons/parallel/MetricsServer (5.72s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.74707ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-ll8wm" [16f8ce5f-a154-4dca-b926-4576a094ea11] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005081645s
addons_test.go:415: (dbg) Run:  kubectl --context addons-396000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-amd64 -p addons-396000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.72s)

TestAddons/parallel/HelmTiller (10.35s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 1.95068ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-r7bbh" [5efe7f58-87c3-4126-84bb-61a558412bcf] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004125845s
addons_test.go:473: (dbg) Run:  kubectl --context addons-396000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-396000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.6949361s)
addons_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 -p addons-396000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.35s)

TestAddons/parallel/CSI (88.98s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 14.797421ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-396000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-396000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e0323fcb-d025-4764-af56-0bf07deabd6e] Pending
helpers_test.go:344: "task-pv-pod" [e0323fcb-d025-4764-af56-0bf07deabd6e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e0323fcb-d025-4764-af56-0bf07deabd6e] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.006069697s
addons_test.go:584: (dbg) Run:  kubectl --context addons-396000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-396000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-396000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-396000 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-396000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-396000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-396000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c610d75f-bc5c-4d42-9dd3-d07696df2fc2] Pending
helpers_test.go:344: "task-pv-pod-restore" [c610d75f-bc5c-4d42-9dd3-d07696df2fc2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c610d75f-bc5c-4d42-9dd3-d07696df2fc2] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004632281s
addons_test.go:626: (dbg) Run:  kubectl --context addons-396000 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-396000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-396000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-amd64 -p addons-396000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-amd64 -p addons-396000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.684811546s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-amd64 -p addons-396000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-darwin-amd64 -p addons-396000 addons disable volumesnapshots --alsologtostderr -v=1: (1.317566559s)
--- PASS: TestAddons/parallel/CSI (88.98s)

TestAddons/parallel/Headlamp (12.21s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-396000 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-396000 --alsologtostderr -v=1: (1.206415665s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-m9nxs" [4632e438-5c39-4ff3-81ec-953bdc1967e3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-m9nxs" [4632e438-5c39-4ff3-81ec-953bdc1967e3] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004803613s
--- PASS: TestAddons/parallel/Headlamp (12.21s)

TestAddons/parallel/CloudSpanner (5.68s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-8677549d7-ttkkb" [61390330-8cbf-4caf-9859-f10b366bd0cb] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004572758s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-396000
--- PASS: TestAddons/parallel/CloudSpanner (5.68s)

TestAddons/parallel/LocalPath (54.01s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-396000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-396000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3c710574-a464-4f6d-b673-fd81102b83b1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3c710574-a464-4f6d-b673-fd81102b83b1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3c710574-a464-4f6d-b673-fd81102b83b1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003345433s
addons_test.go:891: (dbg) Run:  kubectl --context addons-396000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-amd64 -p addons-396000 ssh "cat /opt/local-path-provisioner/pvc-ffb49582-ee45-4cd8-992a-4f3e052c5516_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-396000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-396000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-amd64 -p addons-396000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-amd64 -p addons-396000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.139105096s)
--- PASS: TestAddons/parallel/LocalPath (54.01s)

TestAddons/parallel/NvidiaDevicePlugin (5.61s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-jhxxq" [b275831f-c3ee-479c-bfd5-cb7dba51caa3] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00440259s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-396000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.61s)

TestAddons/parallel/Yakd (5.00s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-rj6s8" [573b210a-61f9-4551-ae28-43eee9b13d7e] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003653741s
--- PASS: TestAddons/parallel/Yakd (5.00s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-396000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-396000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (11.68s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-396000
addons_test.go:172: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-396000: (10.951166936s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-396000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-396000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-396000
--- PASS: TestAddons/StoppedEnableDisable (11.68s)

TestHyperKitDriverInstallOrUpdate (8.11s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.11s)

TestErrorSpam/setup (20.04s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-018000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-018000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-018000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-018000 --driver=docker : (20.043435268s)
--- PASS: TestErrorSpam/setup (20.04s)

TestErrorSpam/start (2.09s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-018000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-018000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-018000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-018000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-018000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-018000 start --dry-run
--- PASS: TestErrorSpam/start (2.09s)

TestErrorSpam/status (1.16s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-018000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-018000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-018000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-018000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-018000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-018000 status
--- PASS: TestErrorSpam/status (1.16s)

TestErrorSpam/pause (1.6s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-018000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-018000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-018000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-018000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-018000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-018000 pause
--- PASS: TestErrorSpam/pause (1.60s)

TestErrorSpam/unpause (1.66s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-018000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-018000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-018000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-018000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-018000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-018000 unpause
--- PASS: TestErrorSpam/unpause (1.66s)

TestErrorSpam/stop (11.38s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-018000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-018000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-018000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-018000 stop: (10.749325015s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-018000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-018000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-018000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-018000 stop
--- PASS: TestErrorSpam/stop (11.38s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18757-9222/.minikube/files/etc/test/nested/copy/9672/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (74.11s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-872000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-872000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (1m14.112339586s)
--- PASS: TestFunctional/serial/StartWithProxy (74.11s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.94s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-872000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-872000 --alsologtostderr -v=8: (35.942233491s)
functional_test.go:659: soft start took 35.942687832s for "functional-872000" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.94s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-872000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.51s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-872000 cache add registry.k8s.io/pause:3.1: (1.244161876s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 cache add registry.k8s.io/pause:3.3
E0425 11:39:02.866504    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 11:39:02.872723    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 11:39:02.883853    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 11:39:02.904101    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 11:39:02.944582    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-872000 cache add registry.k8s.io/pause:3.3: (1.202117933s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 cache add registry.k8s.io/pause:latest
E0425 11:39:03.025689    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 11:39:03.186614    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 11:39:03.506930    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-872000 cache add registry.k8s.io/pause:latest: (1.06035708s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.51s)

TestFunctional/serial/CacheCmd/cache/add_local (1.58s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-872000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local289523847/001
E0425 11:39:04.149087    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 cache add minikube-local-cache-test:functional-872000
E0425 11:39:05.429751    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-872000 cache add minikube-local-cache-test:functional-872000: (1.052051906s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 cache delete minikube-local-cache-test:functional-872000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-872000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.58s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

TestFunctional/serial/CacheCmd/cache/list (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.40s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.88s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-872000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (374.079532ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E0425 11:39:07.990235    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.88s)
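
The non-zero inspecti exit above is the expected midpoint of the scenario: the image is removed inside the node, confirmed absent, then restored from the host cache. A hand-run equivalent (sketch):
    out/minikube-darwin-amd64 -p functional-872000 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-darwin-amd64 -p functional-872000 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # expect exit 1: image gone
    out/minikube-darwin-amd64 -p functional-872000 cache reload
    out/minikube-darwin-amd64 -p functional-872000 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # expect exit 0: restored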

TestFunctional/serial/CacheCmd/cache/delete (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.18s)

TestFunctional/serial/MinikubeKubectlCmd (0.98s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 kubectl -- --context functional-872000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.98s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.45s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-872000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-872000 get pods: (1.446270377s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.45s)
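
These two tests exercise the same passthrough in different ways: "minikube kubectl --" forwards everything after the separator to a kubectl matching the cluster, while out/kubectl invokes that binary directly (sketch):
    out/minikube-darwin-amd64 -p functional-872000 kubectl -- --context functional-872000 get pods
    out/kubectl --context functional-872000 get pods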

TestFunctional/serial/ExtraConfig (40.07s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-872000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0425 11:39:13.110818    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 11:39:23.352310    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
E0425 11:39:43.833192    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-872000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.066369574s)
functional_test.go:757: restart took 40.066531604s for "functional-872000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.07s)
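
The --extra-config flag passes per-component flags through to the deployed control plane, so this restart re-renders the apiserver manifest with the extra admission plugin enabled; any component.key=value triple of this form should behave the same way (sketch):
    out/minikube-darwin-amd64 start -p functional-872000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all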

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-872000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3.01s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-872000 logs: (3.013367104s)
--- PASS: TestFunctional/serial/LogsCmd (3.01s)

TestFunctional/serial/LogsFileCmd (2.91s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd3214860856/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-872000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd3214860856/001/logs.txt: (2.905784653s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.91s)

TestFunctional/serial/InvalidService (4.02s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-872000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-872000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-872000: exit status 115 (577.875864ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30712 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-872000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.02s)
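
Exit status 115 is the SVC_UNREACHABLE outcome asserted here: the Service object exists (hence the URL table on stdout) but no running pod backs it. To reproduce with the suite's manifest (sketch):
    kubectl --context functional-872000 apply -f testdata/invalidsvc.yaml
    out/minikube-darwin-amd64 service invalid-svc -p functional-872000    # expect exit 115
    kubectl --context functional-872000 delete -f testdata/invalidsvc.yaml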

TestFunctional/parallel/ConfigCmd (0.53s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-872000 config get cpus: exit status 14 (64.897249ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-872000 config get cpus: exit status 14 (64.227142ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.53s)
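
The assertion pattern here is that config get exits 14 ("specified key could not be found in config") after an unset, and succeeds after a set; the same cycle can be run against any profile (sketch):
    out/minikube-darwin-amd64 -p functional-872000 config set cpus 2
    out/minikube-darwin-amd64 -p functional-872000 config get cpus      # expect 2, exit 0
    out/minikube-darwin-amd64 -p functional-872000 config unset cpus
    out/minikube-darwin-amd64 -p functional-872000 config get cpus      # expect exit 14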

TestFunctional/parallel/DashboardCmd (12.17s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-872000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-872000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 12603: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.17s)

TestFunctional/parallel/DryRun (1.83s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-872000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-872000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (849.927627ms)

-- stdout --
	* [functional-872000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18757
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0425 11:41:31.149921   12496 out.go:291] Setting OutFile to fd 1 ...
	I0425 11:41:31.150198   12496 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 11:41:31.150204   12496 out.go:304] Setting ErrFile to fd 2...
	I0425 11:41:31.150208   12496 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 11:41:31.150393   12496 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 11:41:31.152218   12496 out.go:298] Setting JSON to false
	I0425 11:41:31.175627   12496 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":7862,"bootTime":1714062629,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0425 11:41:31.175726   12496 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0425 11:41:31.199450   12496 out.go:177] * [functional-872000] minikube v1.33.0 on Darwin 14.4.1
	I0425 11:41:31.262134   12496 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 11:41:31.240290   12496 notify.go:220] Checking for updates...
	I0425 11:41:31.305018   12496 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
	I0425 11:41:31.326091   12496 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0425 11:41:31.368055   12496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 11:41:31.410158   12496 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube
	I0425 11:41:31.468109   12496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 11:41:31.507977   12496 config.go:182] Loaded profile config "functional-872000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 11:41:31.508701   12496 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 11:41:31.585698   12496 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0425 11:41:31.585875   12496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0425 11:41:31.700337   12496 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:105 SystemTime:2024-04-25 18:41:31.689423019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211088384 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0425 11:41:31.741998   12496 out.go:177] * Using the docker driver based on existing profile
	I0425 11:41:31.779029   12496 start.go:297] selected driver: docker
	I0425 11:41:31.779049   12496 start.go:901] validating driver "docker" against &{Name:functional-872000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-872000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 11:41:31.779125   12496 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 11:41:31.803048   12496 out.go:177] 
	W0425 11:41:31.824128   12496 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0425 11:41:31.861105   12496 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-872000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.83s)
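
Exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) confirms that --dry-run still runs driver and resource validation without creating or mutating anything; dropping the undersized --memory request makes the same dry run succeed (sketch):
    out/minikube-darwin-amd64 start -p functional-872000 --dry-run --memory 250MB --driver=docker    # expect exit 23
    out/minikube-darwin-amd64 start -p functional-872000 --dry-run --driver=docker                   # expect exit 0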

TestFunctional/parallel/InternationalLanguage (0.71s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-872000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-872000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (706.597825ms)

-- stdout --
	* [functional-872000] minikube v1.33.0 sur Darwin 14.4.1
	  - MINIKUBE_LOCATION=18757
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0425 11:41:32.973807   12555 out.go:291] Setting OutFile to fd 1 ...
	I0425 11:41:32.973966   12555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 11:41:32.973971   12555 out.go:304] Setting ErrFile to fd 2...
	I0425 11:41:32.973975   12555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 11:41:32.974173   12555 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 11:41:32.975861   12555 out.go:298] Setting JSON to false
	I0425 11:41:32.998929   12555 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":7863,"bootTime":1714062629,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0425 11:41:32.999028   12555 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0425 11:41:33.021651   12555 out.go:177] * [functional-872000] minikube v1.33.0 sur Darwin 14.4.1
	I0425 11:41:33.063897   12555 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 11:41:33.063938   12555 notify.go:220] Checking for updates...
	I0425 11:41:33.085034   12555 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
	I0425 11:41:33.106326   12555 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0425 11:41:33.126785   12555 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 11:41:33.169005   12555 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube
	I0425 11:41:33.211107   12555 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 11:41:33.249953   12555 config.go:182] Loaded profile config "functional-872000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 11:41:33.250766   12555 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 11:41:33.306672   12555 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0425 11:41:33.306823   12555 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0425 11:41:33.417114   12555 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:105 SystemTime:2024-04-25 18:41:33.405872482 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211088384 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0425 11:41:33.438787   12555 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0425 11:41:33.480641   12555 start.go:297] selected driver: docker
	I0425 11:41:33.480675   12555 start.go:901] validating driver "docker" against &{Name:functional-872000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-872000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 11:41:33.480802   12555 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 11:41:33.506764   12555 out.go:177] 
	W0425 11:41:33.544959   12555 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0425 11:41:33.566761   12555 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.71s)

TestFunctional/parallel/StatusCmd (1.18s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.18s)
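
The -f flag takes a Go template over the status struct, so arbitrary key:field pairs can be composed (the "kublet" spelling above is just the label the test chose); -o json emits the same fields machine-readably (sketch):
    out/minikube-darwin-amd64 -p functional-872000 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
    out/minikube-darwin-amd64 -p functional-872000 status -o json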

TestFunctional/parallel/AddonsCmd (0.27s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

TestFunctional/parallel/PersistentVolumeClaim (27.67s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [27565eb1-c090-4f79-aced-b4b217da79c8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005291045s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-872000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-872000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-872000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-872000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2a6f137f-2410-4a8a-959f-75520ada42a5] Pending
helpers_test.go:344: "sp-pod" [2a6f137f-2410-4a8a-959f-75520ada42a5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2a6f137f-2410-4a8a-959f-75520ada42a5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004929297s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-872000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-872000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-872000 delete -f testdata/storage-provisioner/pod.yaml: (1.065195994s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-872000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [15e358b9-4d22-4793-9871-60b01cf8267b] Pending
helpers_test.go:344: "sp-pod" [15e358b9-4d22-4793-9871-60b01cf8267b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [15e358b9-4d22-4793-9871-60b01cf8267b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00586702s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-872000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.67s)
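
The pass criterion is durability across pod recreation: the first sp-pod writes through the claim, the pod is deleted and recreated against the same PVC, and the second pod still sees the file (sketch, using the suite's manifests):
    kubectl --context functional-872000 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-872000 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-872000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-872000 exec sp-pod -- ls /tmp/mount    # expect foo to persist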

TestFunctional/parallel/SSHCmd (0.73s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

TestFunctional/parallel/CpCmd (2.66s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh -n functional-872000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 cp functional-872000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd985737932/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh -n functional-872000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh -n functional-872000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.66s)
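
minikube cp copies in both directions, addressing the node side as <profile>:<path>, and the /tmp/does/not/exist case above shows it creates missing parent directories on the node (sketch):
    out/minikube-darwin-amd64 -p functional-872000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-darwin-amd64 -p functional-872000 cp functional-872000:/home/docker/cp-test.txt ./cp-test.txt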

TestFunctional/parallel/MySQL (31.75s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-872000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-4nkhm" [12cc40da-19de-43dc-805e-b929226e05fc] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-4nkhm" [12cc40da-19de-43dc-805e-b929226e05fc] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.004267394s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-872000 exec mysql-64454c8b5c-4nkhm -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-872000 exec mysql-64454c8b5c-4nkhm -- mysql -ppassword -e "show databases;": exit status 1 (121.51533ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-872000 exec mysql-64454c8b5c-4nkhm -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-872000 exec mysql-64454c8b5c-4nkhm -- mysql -ppassword -e "show databases;": exit status 1 (126.217233ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-872000 exec mysql-64454c8b5c-4nkhm -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-872000 exec mysql-64454c8b5c-4nkhm -- mysql -ppassword -e "show databases;": exit status 1 (112.686721ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-872000 exec mysql-64454c8b5c-4nkhm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (31.75s)
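
The two Access denied errors and the socket error before the final pass are the expected race while mysqld initializes inside the pod; the test simply retries the same query until it succeeds, which can be done by hand as well (sketch; the pod name comes from the run above):
    kubectl --context functional-872000 exec mysql-64454c8b5c-4nkhm -- mysql -ppassword -e "show databases;"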

TestFunctional/parallel/FileSync (0.44s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/9672/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh "sudo cat /etc/test/nested/copy/9672/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.44s)
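
FileSync exercises minikube's host-to-node file sync: files staged under $MINIKUBE_HOME/files/<path> are expected to appear at /<path> inside the node (an assumption about the mechanism, consistent with the path checked above), so the verification reduces to (sketch):
    out/minikube-darwin-amd64 -p functional-872000 ssh "sudo cat /etc/test/nested/copy/9672/hosts"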

TestFunctional/parallel/CertSync (2.62s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/9672.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh "sudo cat /etc/ssl/certs/9672.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/9672.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh "sudo cat /usr/share/ca-certificates/9672.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/96722.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh "sudo cat /etc/ssl/certs/96722.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/96722.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh "sudo cat /usr/share/ca-certificates/96722.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.62s)
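
The numeric filenames are OpenSSL subject-hash links alongside the synced .pem files (the pairing of 51391683.0 with 9672.pem and 3ec20f2e.0 with 96722.pem is inferred from the check groups above); if openssl is present in the node, the hash can be recomputed there (sketch):
    out/minikube-darwin-amd64 -p functional-872000 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/9672.pem"    # expected: 51391683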

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-872000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-872000 ssh "sudo systemctl is-active crio": exit status 1 (476.148927ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)
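
systemctl is-active prints the unit state and exits non-zero for anything but "active", so "inactive" on stdout plus ssh exit status 3 is precisely the passing outcome here: a docker-runtime cluster must not have crio running (sketch):
    out/minikube-darwin-amd64 -p functional-872000 ssh "sudo systemctl is-active crio"    # expect "inactive", exit 3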

TestFunctional/parallel/License (0.68s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.68s)

TestFunctional/parallel/Version/short (0.11s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

TestFunctional/parallel/Version/components (0.71s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.71s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-872000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-872000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-872000
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-872000 image ls --format short --alsologtostderr:
I0425 11:41:45.184257   12847 out.go:291] Setting OutFile to fd 1 ...
I0425 11:41:45.184555   12847 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 11:41:45.184561   12847 out.go:304] Setting ErrFile to fd 2...
I0425 11:41:45.184565   12847 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 11:41:45.184761   12847 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
I0425 11:41:45.185339   12847 config.go:182] Loaded profile config "functional-872000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0425 11:41:45.185429   12847 config.go:182] Loaded profile config "functional-872000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0425 11:41:45.185912   12847 cli_runner.go:164] Run: docker container inspect functional-872000 --format={{.State.Status}}
I0425 11:41:45.248216   12847 ssh_runner.go:195] Run: systemctl --version
I0425 11:41:45.248297   12847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-872000
I0425 11:41:45.309766   12847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58428 SSHKeyPath:/Users/jenkins/minikube-integration/18757-9222/.minikube/machines/functional-872000/id_rsa Username:docker}
I0425 11:41:45.414449   12847 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.37s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-872000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.30.0           | c42f13656d0b2 | 117MB  |
| registry.k8s.io/kube-controller-manager     | v1.30.0           | c7aad43836fa5 | 111MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/minikube-local-cache-test | functional-872000 | 04f54f6f1c91f | 30B    |
| docker.io/library/nginx                     | latest            | 7383c266ef252 | 188MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-scheduler              | v1.30.0           | 259c8277fcbbc | 62MB   |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/google-containers/addon-resizer      | functional-872000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | alpine            | f4215f6ee683f | 48.3MB |
| registry.k8s.io/kube-proxy                  | v1.30.0           | a0bf559e280cf | 84.7MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-872000 image ls --format table --alsologtostderr:
I0425 11:41:45.952600   12865 out.go:291] Setting OutFile to fd 1 ...
I0425 11:41:45.953047   12865 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 11:41:45.953056   12865 out.go:304] Setting ErrFile to fd 2...
I0425 11:41:45.953061   12865 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 11:41:45.953322   12865 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
I0425 11:41:45.955416   12865 config.go:182] Loaded profile config "functional-872000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0425 11:41:45.955580   12865 config.go:182] Loaded profile config "functional-872000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0425 11:41:45.956107   12865 cli_runner.go:164] Run: docker container inspect functional-872000 --format={{.State.Status}}
I0425 11:41:46.023252   12865 ssh_runner.go:195] Run: systemctl --version
I0425 11:41:46.023341   12865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-872000
I0425 11:41:46.074855   12865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58428 SSHKeyPath:/Users/jenkins/minikube-integration/18757-9222/.minikube/machines/functional-872000/id_rsa Username:docker}
I0425 11:41:46.157300   12865 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-872000 image ls --format json --alsologtostderr:
[{"id":"04f54f6f1c91f6e1bd73b30053a0ff13c2b8c070a8a0027c71f917c462bc1e25","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-872000"],"size":"30"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"62000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"111000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48300000"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-872000"],"size":"32900000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"84700000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-872000 image ls --format json --alsologtostderr:
I0425 11:41:45.866798   12862 out.go:291] Setting OutFile to fd 1 ...
I0425 11:41:45.874077   12862 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 11:41:45.874091   12862 out.go:304] Setting ErrFile to fd 2...
I0425 11:41:45.874097   12862 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 11:41:45.874306   12862 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
I0425 11:41:45.874915   12862 config.go:182] Loaded profile config "functional-872000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0425 11:41:45.875010   12862 config.go:182] Loaded profile config "functional-872000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0425 11:41:45.875388   12862 cli_runner.go:164] Run: docker container inspect functional-872000 --format={{.State.Status}}
I0425 11:41:45.938123   12862 ssh_runner.go:195] Run: systemctl --version
I0425 11:41:45.938249   12862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-872000
I0425 11:41:46.004530   12862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58428 SSHKeyPath:/Users/jenkins/minikube-integration/18757-9222/.minikube/machines/functional-872000/id_rsa Username:docker}
I0425 11:41:46.093215   12862 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)
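
Each record in the JSON output above carries an id, repoDigests, repoTags, and a size in bytes encoded as a string. As a minimal sketch, the Go program below decodes that shape from stdin and prints one line per tag; the image struct mirrors the fields visible above and is illustrative, not minikube's internal type.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// image mirrors the fields visible in the `image ls --format json`
// output above; it is an illustrative type, not minikube's own struct.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size in bytes, encoded as a string
}

func main() {
	// Expects e.g. `minikube -p <profile> image ls --format json` on stdin.
	var images []image
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%s\t%s bytes\n", tag, img.Size)
		}
	}
}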

TestFunctional/parallel/ImageCommands/ImageListYaml (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 image ls --format yaml --alsologtostderr
2024/04/25 11:41:45 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-872000 image ls --format yaml --alsologtostderr:
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117000000"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "84700000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48300000"
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "62000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 04f54f6f1c91f6e1bd73b30053a0ff13c2b8c070a8a0027c71f917c462bc1e25
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-872000
size: "30"
- id: 7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-872000
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "111000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-872000 image ls --format yaml --alsologtostderr:
I0425 11:41:45.556097   12853 out.go:291] Setting OutFile to fd 1 ...
I0425 11:41:45.567022   12853 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 11:41:45.567032   12853 out.go:304] Setting ErrFile to fd 2...
I0425 11:41:45.567038   12853 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 11:41:45.567370   12853 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
I0425 11:41:45.588061   12853 config.go:182] Loaded profile config "functional-872000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0425 11:41:45.588197   12853 config.go:182] Loaded profile config "functional-872000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0425 11:41:45.588857   12853 cli_runner.go:164] Run: docker container inspect functional-872000 --format={{.State.Status}}
I0425 11:41:45.651698   12853 ssh_runner.go:195] Run: systemctl --version
I0425 11:41:45.651775   12853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-872000
I0425 11:41:45.712786   12853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58428 SSHKeyPath:/Users/jenkins/minikube-integration/18757-9222/.minikube/machines/functional-872000/id_rsa Username:docker}
I0425 11:41:45.820597   12853 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.39s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-872000 ssh pgrep buildkitd: exit status 1 (351.457145ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 image build -t localhost/my-image:functional-872000 testdata/build --alsologtostderr
E0425 11:41:46.715464    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-872000 image build -t localhost/my-image:functional-872000 testdata/build --alsologtostderr: (1.94977147s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-872000 image build -t localhost/my-image:functional-872000 testdata/build --alsologtostderr:
I0425 11:41:46.553792   12886 out.go:291] Setting OutFile to fd 1 ...
I0425 11:41:46.554656   12886 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 11:41:46.554664   12886 out.go:304] Setting ErrFile to fd 2...
I0425 11:41:46.554668   12886 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 11:41:46.554862   12886 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
I0425 11:41:46.556197   12886 config.go:182] Loaded profile config "functional-872000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0425 11:41:46.556870   12886 config.go:182] Loaded profile config "functional-872000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0425 11:41:46.557257   12886 cli_runner.go:164] Run: docker container inspect functional-872000 --format={{.State.Status}}
I0425 11:41:46.606513   12886 ssh_runner.go:195] Run: systemctl --version
I0425 11:41:46.606591   12886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-872000
I0425 11:41:46.655220   12886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58428 SSHKeyPath:/Users/jenkins/minikube-integration/18757-9222/.minikube/machines/functional-872000/id_rsa Username:docker}
I0425 11:41:46.738108   12886 build_images.go:161] Building image from path: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.686976233.tar
I0425 11:41:46.738186   12886 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0425 11:41:46.746724   12886 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.686976233.tar
I0425 11:41:46.750550   12886 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.686976233.tar: stat -c "%s %y" /var/lib/minikube/build/build.686976233.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.686976233.tar': No such file or directory
I0425 11:41:46.750593   12886 ssh_runner.go:362] scp /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.686976233.tar --> /var/lib/minikube/build/build.686976233.tar (3072 bytes)
I0425 11:41:46.771840   12886 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.686976233
I0425 11:41:46.780149   12886 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.686976233 -xf /var/lib/minikube/build/build.686976233.tar
I0425 11:41:46.788658   12886 docker.go:360] Building image: /var/lib/minikube/build/build.686976233
I0425 11:41:46.788730   12886 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-872000 /var/lib/minikube/build/build.686976233
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:dda9f05ead7848ea7fd1f8cc027ec25b4e7cac29446d149a671d604455fc8017 done
#8 naming to localhost/my-image:functional-872000 done
#8 DONE 0.0s
I0425 11:41:48.382079   12886 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-872000 /var/lib/minikube/build/build.686976233: (1.5933072s)
I0425 11:41:48.382147   12886 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.686976233
I0425 11:41:48.390407   12886 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.686976233.tar
I0425 11:41:48.398425   12886 build_images.go:217] Built localhost/my-image:functional-872000 from /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.686976233.tar
I0425 11:41:48.398453   12886 build_images.go:133] succeeded building to: functional-872000
I0425 11:41:48.398458   12886 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.59s)
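
The build stages above (#1 through #8) imply a three-step Dockerfile: a FROM on gcr.io/k8s-minikube/busybox:latest, a RUN true, and an ADD content.txt /. The Go sketch below stages an equivalent context and shells out to docker build; the Dockerfile and file contents are reconstructed assumptions, not the actual testdata/build context.

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Stage a build context equivalent to the stages in the log above.
	// The exact Dockerfile in testdata/build may differ; this is a guess
	// consistent with steps #5 (FROM), #6 (RUN true) and #7 (ADD).
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)

	dockerfile := "FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		log.Fatal(err)
	}

	// Mirrors the `docker build -t localhost/my-image:<tag> <ctx>` the
	// test runs inside the node; tag name here is illustrative.
	out, err := exec.Command("docker", "build", "-t", "localhost/my-image:example", dir).CombinedOutput()
	if err != nil {
		log.Fatalf("docker build: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}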

TestFunctional/parallel/ImageCommands/Setup (2.56s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.467316469s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-872000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.56s)

TestFunctional/parallel/DockerEnv/bash (2.13s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-872000 docker-env) && out/minikube-darwin-amd64 status -p functional-872000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-872000 docker-env) && out/minikube-darwin-amd64 status -p functional-872000": (1.27232229s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-872000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.13s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 image load --daemon gcr.io/google-containers/addon-resizer:functional-872000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-872000 image load --daemon gcr.io/google-containers/addon-resizer:functional-872000 --alsologtostderr: (3.921052005s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.24s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.33s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.33s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 image load --daemon gcr.io/google-containers/addon-resizer:functional-872000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-872000 image load --daemon gcr.io/google-containers/addon-resizer:functional-872000 --alsologtostderr: (2.045279028s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.37s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.213232061s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-872000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 image load --daemon gcr.io/google-containers/addon-resizer:functional-872000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-872000 image load --daemon gcr.io/google-containers/addon-resizer:functional-872000 --alsologtostderr: (3.802917337s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.40s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 image save gcr.io/google-containers/addon-resizer:functional-872000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-872000 image save gcr.io/google-containers/addon-resizer:functional-872000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.715132717s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.72s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 image rm gcr.io/google-containers/addon-resizer:functional-872000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.74s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-872000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.055733576s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.38s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-872000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 image save --daemon gcr.io/google-containers/addon-resizer:functional-872000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-872000 image save --daemon gcr.io/google-containers/addon-resizer:functional-872000 --alsologtostderr: (1.467668453s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-872000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.59s)

TestFunctional/parallel/ServiceCmd/DeployApp (17.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-872000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-872000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-qftb6" [d13915df-38b8-40ee-bb94-914c43856287] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E0425 11:40:24.793932    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
helpers_test.go:344: "hello-node-6d85cfcfd8-qftb6" [d13915df-38b8-40ee-bb94-914c43856287] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 17.003519201s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (17.13s)
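
The DeployApp flow above is: create a deployment, expose it as a NodePort service, then wait for a pod matching app=hello-node to report Running. A compact sketch of the same flow driven from Go via kubectl; the context name and image are taken from the log, while the helper and the simplified two-minute wait loop are illustrative (the test itself waits up to 10m).

package main

import (
	"log"
	"os/exec"
	"strings"
	"time"
)

// kubectl runs a kubectl command against the cluster context seen in the log.
func kubectl(args ...string) (string, error) {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-872000"}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	if _, err := kubectl("create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver:1.8"); err != nil {
		log.Fatal(err)
	}
	if _, err := kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080"); err != nil {
		log.Fatal(err)
	}
	// Poll pod phase until Running, mirroring the wait in the test.
	for deadline := time.Now().Add(2 * time.Minute); time.Now().Before(deadline); time.Sleep(2 * time.Second) {
		out, _ := kubectl("get", "pods", "-l", "app=hello-node", "-o", "jsonpath={.items[*].status.phase}")
		if strings.Contains(out, "Running") {
			log.Println("hello-node is running")
			return
		}
	}
	log.Fatal("timed out waiting for hello-node")
}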

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-872000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-872000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-872000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 12184: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-872000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

TestFunctional/parallel/ServiceCmd/List (0.78s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.78s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-872000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.15s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-872000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [6b1fc17e-53c2-4e49-9b06-6dcf84c7bfe0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [6b1fc17e-53c2-4e49-9b06-6dcf84c7bfe0] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.005836078s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.15s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 service list -o json
functional_test.go:1490: Took "676.68231ms" to run "out/minikube-darwin-amd64 -p functional-872000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.68s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-872000 service --namespace=default --https --url hello-node: signal: killed (15.00387659s)

-- stdout --
	https://127.0.0.1:58690

-- /stdout --
** stderr **
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:58690
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-872000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
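
The AccessDirect check above confirms the tunnel answers directly on 127.0.0.1. A minimal sketch of such a probe in Go, assuming an nginx-backed LoadBalancer service exposed through minikube tunnel as in the WaitService steps; the timeout is an arbitrary illustrative value.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Probe the tunneled service on the loopback address seen in the log.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://127.0.0.1/")
	if err != nil {
		fmt.Println("tunnel not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("tunnel at http://127.0.0.1 is working, status:", resp.Status)
}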

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-872000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 12228: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

TestFunctional/parallel/ServiceCmd/Format (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-872000 service hello-node --url --format={{.IP}}: signal: killed (15.00550054s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr **
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.01s)

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-872000 service hello-node --url: signal: killed (15.004609047s)

-- stdout --
	http://127.0.0.1:58749

-- /stdout --
** stderr **
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:58749
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.74s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.74s)

TestFunctional/parallel/MountCmd/any-port (8.98s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-872000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port1841829805/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1714070489236088000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port1841829805/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1714070489236088000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port1841829805/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1714070489236088000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port1841829805/001/test-1714070489236088000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-872000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (401.418018ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 25 18:41 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 25 18:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 25 18:41 test-1714070489236088000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh cat /mount-9p/test-1714070489236088000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-872000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ee7c495f-e938-41a2-8fc4-de69addfc115] Pending
helpers_test.go:344: "busybox-mount" [ee7c495f-e938-41a2-8fc4-de69addfc115] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ee7c495f-e938-41a2-8fc4-de69addfc115] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ee7c495f-e938-41a2-8fc4-de69addfc115] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.005364813s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-872000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-872000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port1841829805/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.98s)
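
Note the probe pattern above: the first findmnt over ssh exits nonzero because the 9p mount is not yet visible in the guest, and the test simply retries. A minimal Go sketch of that poll-until-mounted loop, assuming a minikube binary on PATH; the profile name and timeout are illustrative.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls `minikube ssh findmnt` until the 9p mount shows up,
// mirroring the probe/retry pattern in the log above.
func waitForMount(profile, mountPoint string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("minikube", "-p", profile, "ssh",
			fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
		if err := cmd.Run(); err == nil {
			return nil // mount is visible in the guest
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("mount %s not ready after %s", mountPoint, timeout)
}

func main() {
	if err := waitForMount("functional-872000", "/mount-9p", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}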

TestFunctional/parallel/ProfileCmd/profile_list (0.6s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "512.603099ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "91.56649ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.60s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "475.124934ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "87.088727ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.56s)

TestFunctional/parallel/MountCmd/specific-port (2.38s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-872000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port1816251887/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-872000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (457.352578ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-872000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port1816251887/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-872000 ssh "sudo umount -f /mount-9p": exit status 1 (390.483872ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-872000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-872000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port1816251887/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.38s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.83s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-872000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1142961268/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-872000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1142961268/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-872000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1142961268/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-872000 ssh "findmnt -T" /mount1: exit status 1 (595.812265ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-872000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-872000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-872000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1142961268/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-872000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1142961268/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-872000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1142961268/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.83s)
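
The cleanup path above relies on the mount command's kill switch rather than unmounting each share individually. A hand-run sketch, with /tmp/src standing in for the test's temp directory:

    # start a few background mounts against the same profile
    minikube mount -p functional-872000 /tmp/src:/mount1 &
    minikube mount -p functional-872000 /tmp/src:/mount2 &
    minikube mount -p functional-872000 /tmp/src:/mount3 &
    # terminate every mount process belonging to the profile in one call
    minikube mount -p functional-872000 --kill=true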

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.12s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-872000
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)

                                                
                                    
TestFunctional/delete_my-image_image (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-872000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-872000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (97.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-304000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker 
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-304000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker : (1m36.396466294s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-darwin-amd64 -p ha-304000 status -v=7 --alsologtostderr: (1.061735007s)
--- PASS: TestMultiControlPlane/serial/StartCluster (97.46s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-304000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-304000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-304000 -- rollout status deployment/busybox: (2.815023789s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-304000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-304000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-304000 -- exec busybox-fc5497c4f-6vczx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-304000 -- exec busybox-fc5497c4f-b6bp7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-304000 -- exec busybox-fc5497c4f-r9g5d -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-304000 -- exec busybox-fc5497c4f-6vczx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-304000 -- exec busybox-fc5497c4f-b6bp7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-304000 -- exec busybox-fc5497c4f-r9g5d -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-304000 -- exec busybox-fc5497c4f-6vczx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-304000 -- exec busybox-fc5497c4f-b6bp7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-304000 -- exec busybox-fc5497c4f-r9g5d -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.33s)
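
The DeployApp assertions reduce to a short kubectl sequence. A sketch using the ha-304000 context; the jsonpath pod lookup here assumes the busybox deployment in the testdata manifest labels its pods app=busybox, which is an assumption about the manifest's contents:

    kubectl --context ha-304000 apply -f testdata/ha/ha-pod-dns-test.yaml
    kubectl --context ha-304000 rollout status deployment/busybox
    # pick one pod and resolve an external name, a service name, and its FQDN
    POD=$(kubectl --context ha-304000 get pods -l app=busybox -o jsonpath='{.items[0].metadata.name}')
    kubectl --context ha-304000 exec "$POD" -- nslookup kubernetes.io
    kubectl --context ha-304000 exec "$POD" -- nslookup kubernetes.default
    kubectl --context ha-304000 exec "$POD" -- nslookup kubernetes.default.svc.cluster.local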

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-304000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-304000 -- exec busybox-fc5497c4f-6vczx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-304000 -- exec busybox-fc5497c4f-6vczx -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-304000 -- exec busybox-fc5497c4f-b6bp7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-304000 -- exec busybox-fc5497c4f-b6bp7 -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-304000 -- exec busybox-fc5497c4f-r9g5d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-304000 -- exec busybox-fc5497c4f-r9g5d -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.38s)
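
The host-reachability check resolves the special host.minikube.internal name from inside a pod and pings the address it returns. Replayed by hand (substitute any running busybox pod name for the placeholder):

    # line 5 of busybox nslookup output carries the resolved address
    kubectl --context ha-304000 exec <busybox-pod> -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # then ping the gateway address that came back (192.168.65.254 in this run)
    kubectl --context ha-304000 exec <busybox-pod> -- sh -c "ping -c 1 192.168.65.254"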

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (17.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-304000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-304000 -v=7 --alsologtostderr: (16.519275107s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-darwin-amd64 -p ha-304000 status -v=7 --alsologtostderr: (1.293321486s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (17.81s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-304000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.109042281s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (23.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-darwin-amd64 -p ha-304000 status --output json -v=7 --alsologtostderr: (1.294877468s)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 cp testdata/cp-test.txt ha-304000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 cp ha-304000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile202792374/001/cp-test_ha-304000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 cp ha-304000:/home/docker/cp-test.txt ha-304000-m02:/home/docker/cp-test_ha-304000_ha-304000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000-m02 "sudo cat /home/docker/cp-test_ha-304000_ha-304000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 cp ha-304000:/home/docker/cp-test.txt ha-304000-m03:/home/docker/cp-test_ha-304000_ha-304000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000-m03 "sudo cat /home/docker/cp-test_ha-304000_ha-304000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 cp ha-304000:/home/docker/cp-test.txt ha-304000-m04:/home/docker/cp-test_ha-304000_ha-304000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000-m04 "sudo cat /home/docker/cp-test_ha-304000_ha-304000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 cp testdata/cp-test.txt ha-304000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000-m02 "sudo cat /home/docker/cp-test.txt"
E0425 11:44:02.883539    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 cp ha-304000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile202792374/001/cp-test_ha-304000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 cp ha-304000-m02:/home/docker/cp-test.txt ha-304000:/home/docker/cp-test_ha-304000-m02_ha-304000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000 "sudo cat /home/docker/cp-test_ha-304000-m02_ha-304000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 cp ha-304000-m02:/home/docker/cp-test.txt ha-304000-m03:/home/docker/cp-test_ha-304000-m02_ha-304000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000-m03 "sudo cat /home/docker/cp-test_ha-304000-m02_ha-304000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 cp ha-304000-m02:/home/docker/cp-test.txt ha-304000-m04:/home/docker/cp-test_ha-304000-m02_ha-304000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000-m04 "sudo cat /home/docker/cp-test_ha-304000-m02_ha-304000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 cp testdata/cp-test.txt ha-304000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 cp ha-304000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile202792374/001/cp-test_ha-304000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 cp ha-304000-m03:/home/docker/cp-test.txt ha-304000:/home/docker/cp-test_ha-304000-m03_ha-304000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000 "sudo cat /home/docker/cp-test_ha-304000-m03_ha-304000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 cp ha-304000-m03:/home/docker/cp-test.txt ha-304000-m02:/home/docker/cp-test_ha-304000-m03_ha-304000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000-m02 "sudo cat /home/docker/cp-test_ha-304000-m03_ha-304000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 cp ha-304000-m03:/home/docker/cp-test.txt ha-304000-m04:/home/docker/cp-test_ha-304000-m03_ha-304000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000-m04 "sudo cat /home/docker/cp-test_ha-304000-m03_ha-304000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 cp testdata/cp-test.txt ha-304000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 cp ha-304000-m04:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile202792374/001/cp-test_ha-304000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 cp ha-304000-m04:/home/docker/cp-test.txt ha-304000:/home/docker/cp-test_ha-304000-m04_ha-304000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000 "sudo cat /home/docker/cp-test_ha-304000-m04_ha-304000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 cp ha-304000-m04:/home/docker/cp-test.txt ha-304000-m02:/home/docker/cp-test_ha-304000-m04_ha-304000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000-m02 "sudo cat /home/docker/cp-test_ha-304000-m04_ha-304000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 cp ha-304000-m04:/home/docker/cp-test.txt ha-304000-m03:/home/docker/cp-test_ha-304000-m04_ha-304000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 ssh -n ha-304000-m03 "sudo cat /home/docker/cp-test_ha-304000-m04_ha-304000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (23.68s)
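
The copy matrix above is every pairing of host and nodes; the core operations are just three forms of minikube cp plus an ssh readback. A condensed sketch (the /tmp destination is a placeholder for the test's temp dir):

    # host -> node
    minikube -p ha-304000 cp testdata/cp-test.txt ha-304000:/home/docker/cp-test.txt
    # node -> host
    minikube -p ha-304000 cp ha-304000:/home/docker/cp-test.txt /tmp/cp-test_ha-304000.txt
    # node -> node
    minikube -p ha-304000 cp ha-304000:/home/docker/cp-test.txt ha-304000-m02:/home/docker/cp-test_ha-304000_ha-304000-m02.txt
    # verify on the target node
    minikube -p ha-304000 ssh -n ha-304000-m02 "sudo cat /home/docker/cp-test_ha-304000_ha-304000-m02.txt"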

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (11.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-304000 node stop m02 -v=7 --alsologtostderr: (10.75443949s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 status -v=7 --alsologtostderr
E0425 11:44:30.572994    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-304000 status -v=7 --alsologtostderr: exit status 7 (1.014286146s)

                                                
                                                
-- stdout --
	ha-304000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-304000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-304000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-304000-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 11:44:29.749093   14368 out.go:291] Setting OutFile to fd 1 ...
	I0425 11:44:29.749429   14368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 11:44:29.749436   14368 out.go:304] Setting ErrFile to fd 2...
	I0425 11:44:29.749439   14368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 11:44:29.749630   14368 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 11:44:29.749828   14368 out.go:298] Setting JSON to false
	I0425 11:44:29.749850   14368 mustload.go:65] Loading cluster: ha-304000
	I0425 11:44:29.749894   14368 notify.go:220] Checking for updates...
	I0425 11:44:29.750242   14368 config.go:182] Loaded profile config "ha-304000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 11:44:29.750257   14368 status.go:255] checking status of ha-304000 ...
	I0425 11:44:29.750668   14368 cli_runner.go:164] Run: docker container inspect ha-304000 --format={{.State.Status}}
	I0425 11:44:29.801284   14368 status.go:330] ha-304000 host status = "Running" (err=<nil>)
	I0425 11:44:29.801320   14368 host.go:66] Checking if "ha-304000" exists ...
	I0425 11:44:29.801549   14368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-304000
	I0425 11:44:29.852699   14368 host.go:66] Checking if "ha-304000" exists ...
	I0425 11:44:29.852979   14368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 11:44:29.853047   14368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-304000
	I0425 11:44:29.902098   14368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58887 SSHKeyPath:/Users/jenkins/minikube-integration/18757-9222/.minikube/machines/ha-304000/id_rsa Username:docker}
	I0425 11:44:29.985598   14368 ssh_runner.go:195] Run: systemctl --version
	I0425 11:44:29.989880   14368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 11:44:30.000740   14368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-304000
	I0425 11:44:30.050313   14368 kubeconfig.go:125] found "ha-304000" server: "https://127.0.0.1:58891"
	I0425 11:44:30.050345   14368 api_server.go:166] Checking apiserver status ...
	I0425 11:44:30.050385   14368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 11:44:30.062369   14368 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2143/cgroup
	W0425 11:44:30.073567   14368 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2143/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 11:44:30.073642   14368 ssh_runner.go:195] Run: ls
	I0425 11:44:30.077888   14368 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58891/healthz ...
	I0425 11:44:30.083082   14368 api_server.go:279] https://127.0.0.1:58891/healthz returned 200:
	ok
	I0425 11:44:30.083102   14368 status.go:422] ha-304000 apiserver status = Running (err=<nil>)
	I0425 11:44:30.083115   14368 status.go:257] ha-304000 status: &{Name:ha-304000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 11:44:30.083127   14368 status.go:255] checking status of ha-304000-m02 ...
	I0425 11:44:30.083377   14368 cli_runner.go:164] Run: docker container inspect ha-304000-m02 --format={{.State.Status}}
	I0425 11:44:30.132515   14368 status.go:330] ha-304000-m02 host status = "Stopped" (err=<nil>)
	I0425 11:44:30.132547   14368 status.go:343] host is not running, skipping remaining checks
	I0425 11:44:30.132557   14368 status.go:257] ha-304000-m02 status: &{Name:ha-304000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 11:44:30.132573   14368 status.go:255] checking status of ha-304000-m03 ...
	I0425 11:44:30.132851   14368 cli_runner.go:164] Run: docker container inspect ha-304000-m03 --format={{.State.Status}}
	I0425 11:44:30.182311   14368 status.go:330] ha-304000-m03 host status = "Running" (err=<nil>)
	I0425 11:44:30.182339   14368 host.go:66] Checking if "ha-304000-m03" exists ...
	I0425 11:44:30.182621   14368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-304000-m03
	I0425 11:44:30.231890   14368 host.go:66] Checking if "ha-304000-m03" exists ...
	I0425 11:44:30.232163   14368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 11:44:30.232217   14368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-304000-m03
	I0425 11:44:30.282043   14368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58991 SSHKeyPath:/Users/jenkins/minikube-integration/18757-9222/.minikube/machines/ha-304000-m03/id_rsa Username:docker}
	I0425 11:44:30.365665   14368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 11:44:30.376608   14368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-304000
	I0425 11:44:30.425995   14368 kubeconfig.go:125] found "ha-304000" server: "https://127.0.0.1:58891"
	I0425 11:44:30.426021   14368 api_server.go:166] Checking apiserver status ...
	I0425 11:44:30.426057   14368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 11:44:30.436512   14368 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2058/cgroup
	W0425 11:44:30.445390   14368 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2058/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 11:44:30.445454   14368 ssh_runner.go:195] Run: ls
	I0425 11:44:30.449310   14368 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58891/healthz ...
	I0425 11:44:30.453131   14368 api_server.go:279] https://127.0.0.1:58891/healthz returned 200:
	ok
	I0425 11:44:30.453144   14368 status.go:422] ha-304000-m03 apiserver status = Running (err=<nil>)
	I0425 11:44:30.453159   14368 status.go:257] ha-304000-m03 status: &{Name:ha-304000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 11:44:30.453171   14368 status.go:255] checking status of ha-304000-m04 ...
	I0425 11:44:30.453414   14368 cli_runner.go:164] Run: docker container inspect ha-304000-m04 --format={{.State.Status}}
	I0425 11:44:30.505793   14368 status.go:330] ha-304000-m04 host status = "Running" (err=<nil>)
	I0425 11:44:30.505819   14368 host.go:66] Checking if "ha-304000-m04" exists ...
	I0425 11:44:30.506071   14368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-304000-m04
	I0425 11:44:30.555425   14368 host.go:66] Checking if "ha-304000-m04" exists ...
	I0425 11:44:30.555675   14368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 11:44:30.555721   14368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-304000-m04
	I0425 11:44:30.604682   14368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59118 SSHKeyPath:/Users/jenkins/minikube-integration/18757-9222/.minikube/machines/ha-304000-m04/id_rsa Username:docker}
	I0425 11:44:30.685896   14368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 11:44:30.696507   14368 status.go:257] ha-304000-m04 status: &{Name:ha-304000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.77s)
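
Note the status semantics the test depends on: with m02 stopped, status still reports the remaining nodes normally but exits non-zero (7 in the run above), so the "Non-zero exit" is expected rather than a failure. In script form:

    minikube -p ha-304000 node stop m02
    # status exits non-zero while any node is down; capture it rather than aborting
    minikube -p ha-304000 status || echo "status exited $? - at least one node is not running"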

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (21.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-304000 node start m02 -v=7 --alsologtostderr: (20.042973684s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-darwin-amd64 -p ha-304000 status -v=7 --alsologtostderr: (1.680973607s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (21.79s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.33550114s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.34s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (214.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-304000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-304000 -v=7 --alsologtostderr
E0425 11:45:08.940679    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 11:45:08.947079    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 11:45:08.957546    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 11:45:08.979701    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 11:45:09.020521    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 11:45:09.102813    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 11:45:09.263620    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 11:45:09.585065    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 11:45:10.226936    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 11:45:11.507806    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 11:45:14.068621    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 11:45:19.190001    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-304000 -v=7 --alsologtostderr: (34.19128434s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-304000 --wait=true -v=7 --alsologtostderr
E0425 11:45:29.430743    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 11:45:49.911960    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 11:46:30.873536    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
E0425 11:47:52.795087    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-304000 --wait=true -v=7 --alsologtostderr: (2m59.802830093s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-304000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (214.13s)
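
The invariant here is simply that the node set survives a full stop/start cycle. By hand:

    minikube node list -p ha-304000          # record the node list
    minikube stop -p ha-304000
    minikube start -p ha-304000 --wait=true
    minikube node list -p ha-304000          # expect the same nodes, including m04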

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-304000 node delete m03 -v=7 --alsologtostderr: (9.722090459s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.81s)
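
After the delete, the readiness check runs a go-template directly against kubectl. The template below is the one from the run above, with quoting adjusted so it pastes into a plain shell:

    minikube -p ha-304000 node delete m03
    # one "True" per remaining Ready node
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'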

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (33.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 stop -v=7 --alsologtostderr
E0425 11:49:02.886589    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-304000 stop -v=7 --alsologtostderr: (32.900738909s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-304000 status -v=7 --alsologtostderr: exit status 7 (212.901226ms)

                                                
                                                
-- stdout --
	ha-304000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-304000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-304000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 11:49:13.360638   15432 out.go:291] Setting OutFile to fd 1 ...
	I0425 11:49:13.360939   15432 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 11:49:13.360946   15432 out.go:304] Setting ErrFile to fd 2...
	I0425 11:49:13.360949   15432 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 11:49:13.361127   15432 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-9222/.minikube/bin
	I0425 11:49:13.361299   15432 out.go:298] Setting JSON to false
	I0425 11:49:13.361321   15432 mustload.go:65] Loading cluster: ha-304000
	I0425 11:49:13.361360   15432 notify.go:220] Checking for updates...
	I0425 11:49:13.361620   15432 config.go:182] Loaded profile config "ha-304000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 11:49:13.361634   15432 status.go:255] checking status of ha-304000 ...
	I0425 11:49:13.362790   15432 cli_runner.go:164] Run: docker container inspect ha-304000 --format={{.State.Status}}
	I0425 11:49:13.412214   15432 status.go:330] ha-304000 host status = "Stopped" (err=<nil>)
	I0425 11:49:13.412233   15432 status.go:343] host is not running, skipping remaining checks
	I0425 11:49:13.412240   15432 status.go:257] ha-304000 status: &{Name:ha-304000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 11:49:13.412257   15432 status.go:255] checking status of ha-304000-m02 ...
	I0425 11:49:13.412516   15432 cli_runner.go:164] Run: docker container inspect ha-304000-m02 --format={{.State.Status}}
	I0425 11:49:13.460651   15432 status.go:330] ha-304000-m02 host status = "Stopped" (err=<nil>)
	I0425 11:49:13.460682   15432 status.go:343] host is not running, skipping remaining checks
	I0425 11:49:13.460693   15432 status.go:257] ha-304000-m02 status: &{Name:ha-304000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 11:49:13.460712   15432 status.go:255] checking status of ha-304000-m04 ...
	I0425 11:49:13.461004   15432 cli_runner.go:164] Run: docker container inspect ha-304000-m04 --format={{.State.Status}}
	I0425 11:49:13.509106   15432 status.go:330] ha-304000-m04 host status = "Stopped" (err=<nil>)
	I0425 11:49:13.509157   15432 status.go:343] host is not running, skipping remaining checks
	I0425 11:49:13.509175   15432 status.go:257] ha-304000-m04 status: &{Name:ha-304000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (33.11s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (82.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-304000 --wait=true -v=7 --alsologtostderr --driver=docker 
E0425 11:50:08.943257    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-304000 --wait=true -v=7 --alsologtostderr --driver=docker : (1m21.142213372s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (82.25s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (33.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-304000 --control-plane -v=7 --alsologtostderr
E0425 11:50:36.636775    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-304000 --control-plane -v=7 --alsologtostderr: (32.311845574s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-304000 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-darwin-amd64 -p ha-304000 status -v=7 --alsologtostderr: (1.321381328s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (33.63s)
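
Growing the control plane back is a single node add with the control-plane flag:

    # add another control-plane node to the running ha-304000 cluster
    minikube node add -p ha-304000 --control-plane
    minikube -p ha-304000 status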

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.073040717s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.07s)

                                                
                                    
TestImageBuild/serial/Setup (20.89s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-317000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-317000 --driver=docker : (20.892098922s)
--- PASS: TestImageBuild/serial/Setup (20.89s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.79s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-317000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-317000: (1.78496559s)
--- PASS: TestImageBuild/serial/NormalBuild (1.79s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.94s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-317000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.94s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.79s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-317000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.79s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.77s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-317000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.77s)
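
The build variants exercised above map to plain CLI invocations; consolidated here for reference, with the image name, contexts, and profile as in the runs:

    # straight build from a directory context
    minikube -p image-317000 image build -t aaa:latest ./testdata/image-build/test-normal
    # pass a build arg and disable the cache via --build-opt
    minikube -p image-317000 image build -t aaa:latest \
      --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg
    # point at a Dockerfile outside the context root with -f
    minikube -p image-317000 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f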

                                                
                                    
TestJSONOutput/start/Command (74.58s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-998000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-998000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (1m14.584124445s)
--- PASS: TestJSONOutput/start/Command (74.58s)
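
With --output=json, each stdout line is a CloudEvent like those quoted under TestErrorJSONOutput below. A sketch for consuming the step events, assuming jq is installed (jq is not part of the test run):

    minikube start -p json-output-998000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + "/" + .data.totalsteps + " " + .data.message'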

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.56s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-998000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.56s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-998000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (10.69s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-998000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-998000 --output=json --user=testUser: (10.687351017s)
--- PASS: TestJSONOutput/stop/Command (10.69s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.77s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-824000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-824000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (388.999365ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"47de910a-dbd0-4db2-b76e-c716b0a43213","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-824000] minikube v1.33.0 on Darwin 14.4.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"14606e82-96b2-4146-ad8b-878dbaf6f6ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18757"}}
	{"specversion":"1.0","id":"705d4df0-52d9-425b-9173-8b055e4fad05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig"}}
	{"specversion":"1.0","id":"68147762-340e-4c53-af0c-07b6c138fa14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"26ace975-b2f4-4ce2-824c-3d624a73c979","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6e4c9e9f-5905-41d1-8dc4-d9c64a9b534c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-9222/.minikube"}}
	{"specversion":"1.0","id":"0421b189-719b-4dee-99ca-43e3d2257e60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a21a1caa-3f9b-48f6-8eab-a6f3e77888f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-824000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-824000
--- PASS: TestErrorJSONOutput (0.77s)

TestKicCustomNetwork/create_custom_network (22.45s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-204000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-204000 --network=: (20.053102806s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-204000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-204000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-204000: (2.346163868s)
--- PASS: TestKicCustomNetwork/create_custom_network (22.45s)

TestKicCustomNetwork/use_default_bridge_network (22.67s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-683000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-683000 --network=bridge: (20.407529997s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-683000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-683000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-683000: (2.212555449s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.67s)

TestKicExistingNetwork (22.49s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-042000 --network=existing-network
E0425 11:54:02.889316    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-042000 --network=existing-network: (19.89569394s)
helpers_test.go:175: Cleaning up "existing-network-042000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-042000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-042000: (2.214420817s)
--- PASS: TestKicExistingNetwork (22.49s)

TestKicCustomSubnet (22.22s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-638000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-638000 --subnet=192.168.60.0/24: (19.812183451s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-638000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-638000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-638000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-638000: (2.359956977s)
--- PASS: TestKicCustomSubnet (22.22s)

TestKicStaticIP (22.75s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-592000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-592000 --static-ip=192.168.200.200: (20.183094952s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-592000 ip
helpers_test.go:175: Cleaning up "static-ip-592000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-592000
E0425 11:55:08.946155    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/functional-872000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-592000: (2.324435002s)
--- PASS: TestKicStaticIP (22.75s)

TestMainNoArgs (0.09s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.09s)

TestMinikubeProfile (47.28s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-266000 --driver=docker 
E0425 11:55:25.939418    9672 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-9222/.minikube/profiles/addons-396000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-266000 --driver=docker : (20.326602757s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-268000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-268000 --driver=docker : (20.3102983s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-266000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-268000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-268000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-268000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-268000: (2.427035382s)
helpers_test.go:175: Cleaning up "first-266000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-266000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-266000: (2.372321848s)
--- PASS: TestMinikubeProfile (47.28s)

TestMountStart/serial/StartWithMountFirst (7.03s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-625000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-625000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.034110556s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.03s)

TestMountStart/serial/VerifyMountFirst (0.38s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-625000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

TestMountStart/serial/StartWithMountSecond (7.33s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-636000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-636000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.329018738s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.33s)

TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-636000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (2.04s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-625000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-625000 --alsologtostderr -v=5: (2.038541416s)
--- PASS: TestMountStart/serial/DeleteFirst (2.04s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-636000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (1.55s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-636000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-636000: (1.545582106s)
--- PASS: TestMountStart/serial/Stop (1.55s)

TestMountStart/serial/RestartStopped (8.24s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-636000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-636000: (7.234851284s)
--- PASS: TestMountStart/serial/RestartStopped (8.24s)

TestPreload (107.95s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-608000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-608000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m9.106413778s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-608000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-608000 image pull gcr.io/k8s-minikube/busybox: (1.450611569s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-608000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-608000: (10.855375656s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-608000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-608000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (23.721690394s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-608000 image list
helpers_test.go:175: Cleaning up "test-preload-608000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-608000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-608000: (2.421011119s)
--- PASS: TestPreload (107.95s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (14.27s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.0 on darwin
- MINIKUBE_LOCATION=18757
- KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1898302570/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1898302570/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1898302570/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1898302570/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (14.27s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.52s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.0 on darwin
- MINIKUBE_LOCATION=18757
- KUBECONFIG=/Users/jenkins/minikube-integration/18757-9222/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1352904481/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1352904481/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1352904481/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1352904481/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.52s)

Test skip (17/208)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

TestAddons/parallel/Registry (13.79s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 13.905574ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-vjfbq" [9c7e5008-f72c-4cdc-b649-db425a1fd3dd] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007195321s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-k498m" [9cb6ff82-ba39-4928-84da-4a0eb134a6f0] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006892835s
addons_test.go:340: (dbg) Run:  kubectl --context addons-396000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-396000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-396000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.711818816s)
addons_test.go:355: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (13.79s)

TestAddons/parallel/Ingress (10.77s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-396000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-396000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-396000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0129ef0c-0277-4df3-8946-7976774e62f7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0129ef0c-0277-4df3-8946-7976774e62f7] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.005449156s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-396000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:282: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (10.77s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (8.12s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-872000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-872000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-fzd55" [59d43cd6-d33b-4f58-978c-ae0d0d2c5cee] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-fzd55" [59d43cd6-d33b-4f58-978c-ae0d0d2c5cee] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003601213s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (8.12s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
