Test Report: Docker_macOS 18998

e8d3a518ce9b98b9e9fc9f8b62f75f3019a13e07:2024-07-03:35167

Tests failed (19/204)

TestOffline (753.28s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-610000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-610000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m32.712209794s)
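For context, a minimal Go sketch of how an integration test such as aab_offline_test.go can shell out to the minikube binary and surface a non-zero exit like the status 52 above. The helper name, the 15-minute timeout, and the main wrapper are illustrative assumptions, not minikube's actual test harness:

	package main

	import (
		"context"
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	// runStart invokes "minikube start" with the same flags as the failing
	// run and returns an error carrying the combined output when the exit
	// code is non-zero. The timeout is an assumed bound, not the test's.
	func runStart(binary, profile string) error {
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
		defer cancel()
		cmd := exec.CommandContext(ctx, binary,
			"start", "-p", profile,
			"--alsologtostderr", "-v=1",
			"--memory=2048", "--wait=true", "--driver=docker")
		out, err := cmd.CombinedOutput()
		if err != nil {
			// For a non-zero exit, err is an *exec.ExitError exposing the
			// process exit code (52 in this run).
			return fmt.Errorf("start %s: %w\n%s", profile, err, out)
		}
		return nil
	}

	func main() {
		if err := runStart("out/minikube-darwin-amd64", "offline-docker-610000"); err != nil {
			log.Fatal(err)
		}
	}

The captured stdout/stderr of the failing run follows.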

-- stdout --
	* [offline-docker-610000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=18998
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18998-1161/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18998-1161/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "offline-docker-610000" primary control-plane node in "offline-docker-610000" cluster
	* Pulling base image v0.0.44-1719972989-19184 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-610000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0703 17:13:40.950449    8716 out.go:291] Setting OutFile to fd 1 ...
	I0703 17:13:40.950638    8716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 17:13:40.950648    8716 out.go:304] Setting ErrFile to fd 2...
	I0703 17:13:40.950654    8716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 17:13:40.950846    8716 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18998-1161/.minikube/bin
	I0703 17:13:40.952423    8716 out.go:298] Setting JSON to false
	I0703 17:13:40.976084    8716 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6189,"bootTime":1720045831,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0703 17:13:40.976201    8716 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0703 17:13:40.997650    8716 out.go:177] * [offline-docker-610000] minikube v1.33.1 on Darwin 14.5
	I0703 17:13:41.018564    8716 notify.go:220] Checking for updates...
	I0703 17:13:41.018571    8716 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 17:13:41.039520    8716 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18998-1161/kubeconfig
	I0703 17:13:41.060366    8716 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0703 17:13:41.081546    8716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 17:13:41.102599    8716 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18998-1161/.minikube
	I0703 17:13:41.123336    8716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 17:13:41.144788    8716 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 17:13:41.170691    8716 docker.go:122] docker version: linux-26.1.4:Docker Desktop 4.31.0 (153195)
	I0703 17:13:41.170869    8716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 17:13:41.253557    8716 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:85 OomKillDisable:false NGoroutines:153 SystemTime:2024-07-04 00:13:41.24353269 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.6.31-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:33654255616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.24] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.2.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.9.3]] Warnings:<nil>}}
	I0703 17:13:41.274563    8716 out.go:177] * Using the docker driver based on user configuration
	I0703 17:13:41.295395    8716 start.go:297] selected driver: docker
	I0703 17:13:41.295411    8716 start.go:901] validating driver "docker" against <nil>
	I0703 17:13:41.295420    8716 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 17:13:41.298284    8716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 17:13:41.432101    8716 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:85 OomKillDisable:false NGoroutines:153 SystemTime:2024-07-04 00:13:41.410151289 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.6.31-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:33654255616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.24] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.2.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.9.3]] Warnings:<nil>}}
	I0703 17:13:41.432376    8716 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0703 17:13:41.432649    8716 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 17:13:41.453557    8716 out.go:177] * Using Docker Desktop driver with root privileges
	I0703 17:13:41.474362    8716 cni.go:84] Creating CNI manager for ""
	I0703 17:13:41.474386    8716 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0703 17:13:41.474392    8716 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0703 17:13:41.474460    8716 start.go:340] cluster config:
	{Name:offline-docker-610000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:offline-docker-610000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 17:13:41.495656    8716 out.go:177] * Starting "offline-docker-610000" primary control-plane node in "offline-docker-610000" cluster
	I0703 17:13:41.517814    8716 cache.go:121] Beginning downloading kic base image for docker with docker
	I0703 17:13:41.539437    8716 out.go:177] * Pulling base image v0.0.44-1719972989-19184 ...
	I0703 17:13:41.581791    8716 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0703 17:13:41.581844    8716 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 in local docker daemon
	I0703 17:13:41.581864    8716 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0703 17:13:41.581889    8716 cache.go:56] Caching tarball of preloaded images
	I0703 17:13:41.582179    8716 preload.go:173] Found /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0703 17:13:41.582207    8716 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0703 17:13:41.583862    8716 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/offline-docker-610000/config.json ...
	I0703 17:13:41.583951    8716 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/offline-docker-610000/config.json: {Name:mkac81b1ba24c73d57c8742a4f525924461634a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 17:13:41.647155    8716 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 in local docker daemon, skipping pull
	I0703 17:13:41.647170    8716 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 exists in daemon, skipping load
	I0703 17:13:41.647191    8716 cache.go:194] Successfully downloaded all kic artifacts
	I0703 17:13:41.647230    8716 start.go:360] acquireMachinesLock for offline-docker-610000: {Name:mk7ccd9f7d8553eb1d4415733edb6e82f86ab119 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 17:13:41.667887    8716 start.go:364] duration metric: took 20.635137ms to acquireMachinesLock for "offline-docker-610000"
	I0703 17:13:41.668004    8716 start.go:93] Provisioning new machine with config: &{Name:offline-docker-610000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:offline-docker-610000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0703 17:13:41.668133    8716 start.go:125] createHost starting for "" (driver="docker")
	I0703 17:13:41.711790    8716 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0703 17:13:41.712043    8716 start.go:159] libmachine.API.Create for "offline-docker-610000" (driver="docker")
	I0703 17:13:41.712076    8716 client.go:168] LocalClient.Create starting
	I0703 17:13:41.712214    8716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18998-1161/.minikube/certs/ca.pem
	I0703 17:13:41.712284    8716 main.go:141] libmachine: Decoding PEM data...
	I0703 17:13:41.712316    8716 main.go:141] libmachine: Parsing certificate...
	I0703 17:13:41.712406    8716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18998-1161/.minikube/certs/cert.pem
	I0703 17:13:41.712451    8716 main.go:141] libmachine: Decoding PEM data...
	I0703 17:13:41.712459    8716 main.go:141] libmachine: Parsing certificate...
	I0703 17:13:41.712941    8716 cli_runner.go:164] Run: docker network inspect offline-docker-610000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0703 17:13:41.796586    8716 cli_runner.go:211] docker network inspect offline-docker-610000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0703 17:13:41.796748    8716 network_create.go:284] running [docker network inspect offline-docker-610000] to gather additional debugging logs...
	I0703 17:13:41.796818    8716 cli_runner.go:164] Run: docker network inspect offline-docker-610000
	W0703 17:13:41.821176    8716 cli_runner.go:211] docker network inspect offline-docker-610000 returned with exit code 1
	I0703 17:13:41.821217    8716 network_create.go:287] error running [docker network inspect offline-docker-610000]: docker network inspect offline-docker-610000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-610000 not found
	I0703 17:13:41.821228    8716 network_create.go:289] output of [docker network inspect offline-docker-610000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-610000 not found
	
	** /stderr **
	I0703 17:13:41.821383    8716 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0703 17:13:41.842720    8716 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 17:13:41.844083    8716 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 17:13:41.844451    8716 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00147ec10}
	I0703 17:13:41.844491    8716 network_create.go:124] attempt to create docker network offline-docker-610000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0703 17:13:41.844557    8716 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-610000 offline-docker-610000
	I0703 17:13:41.901720    8716 network_create.go:108] docker network offline-docker-610000 192.168.67.0/24 created
	I0703 17:13:41.901778    8716 kic.go:121] calculated static IP "192.168.67.2" for the "offline-docker-610000" container
	I0703 17:13:41.901871    8716 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0703 17:13:41.976752    8716 cli_runner.go:164] Run: docker volume create offline-docker-610000 --label name.minikube.sigs.k8s.io=offline-docker-610000 --label created_by.minikube.sigs.k8s.io=true
	I0703 17:13:42.002390    8716 oci.go:103] Successfully created a docker volume offline-docker-610000
	I0703 17:13:42.002497    8716 cli_runner.go:164] Run: docker run --rm --name offline-docker-610000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-610000 --entrypoint /usr/bin/test -v offline-docker-610000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 -d /var/lib
	I0703 17:13:42.354784    8716 oci.go:107] Successfully prepared a docker volume offline-docker-610000
	I0703 17:13:42.354825    8716 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0703 17:13:42.354841    8716 kic.go:194] Starting extracting preloaded images to volume ...
	I0703 17:13:42.354983    8716 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-610000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0703 17:19:41.716583    8716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 17:19:41.716712    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:19:41.738591    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	I0703 17:19:41.738717    8716 retry.go:31] will retry after 149.226685ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:41.888221    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:19:41.907877    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	I0703 17:19:41.907972    8716 retry.go:31] will retry after 233.515039ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:42.143948    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:19:42.166526    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	I0703 17:19:42.166637    8716 retry.go:31] will retry after 450.772136ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:42.618126    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:19:42.639897    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	I0703 17:19:42.639990    8716 retry.go:31] will retry after 967.148992ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:43.607608    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:19:43.628846    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	W0703 17:19:43.628954    8716 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	
	W0703 17:19:43.628978    8716 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:43.629032    8716 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0703 17:19:43.629085    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:19:43.649121    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	I0703 17:19:43.649219    8716 retry.go:31] will retry after 125.584441ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:43.777239    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:19:43.797724    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	I0703 17:19:43.797819    8716 retry.go:31] will retry after 201.750388ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:44.001578    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:19:44.023659    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	I0703 17:19:44.023763    8716 retry.go:31] will retry after 517.50396ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:44.542625    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:19:44.564875    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	I0703 17:19:44.564974    8716 retry.go:31] will retry after 819.245077ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:45.386656    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:19:45.408553    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	W0703 17:19:45.408663    8716 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	
	W0703 17:19:45.408683    8716 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:45.408692    8716 start.go:128] duration metric: took 6m3.738343408s to createHost
	I0703 17:19:45.408699    8716 start.go:83] releasing machines lock for "offline-docker-610000", held for 6m3.738590641s
	W0703 17:19:45.408712    8716 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0703 17:19:45.409186    8716 cli_runner.go:164] Run: docker container inspect offline-docker-610000 --format={{.State.Status}}
	W0703 17:19:45.428810    8716 cli_runner.go:211] docker container inspect offline-docker-610000 --format={{.State.Status}} returned with exit code 1
	I0703 17:19:45.428873    8716 delete.go:82] Unable to get host status for offline-docker-610000, assuming it has already been deleted: state: unknown state "offline-docker-610000": docker container inspect offline-docker-610000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	W0703 17:19:45.428958    8716 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0703 17:19:45.428968    8716 start.go:728] Will try again in 5 seconds ...
	I0703 17:19:50.429178    8716 start.go:360] acquireMachinesLock for offline-docker-610000: {Name:mk7ccd9f7d8553eb1d4415733edb6e82f86ab119 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 17:19:50.430320    8716 start.go:364] duration metric: took 1.074258ms to acquireMachinesLock for "offline-docker-610000"
	I0703 17:19:50.430410    8716 start.go:96] Skipping create...Using existing machine configuration
	I0703 17:19:50.430425    8716 fix.go:54] fixHost starting: 
	I0703 17:19:50.430886    8716 cli_runner.go:164] Run: docker container inspect offline-docker-610000 --format={{.State.Status}}
	W0703 17:19:50.452900    8716 cli_runner.go:211] docker container inspect offline-docker-610000 --format={{.State.Status}} returned with exit code 1
	I0703 17:19:50.452951    8716 fix.go:112] recreateIfNeeded on offline-docker-610000: state= err=unknown state "offline-docker-610000": docker container inspect offline-docker-610000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:50.452969    8716 fix.go:117] machineExists: false. err=machine does not exist
	I0703 17:19:50.477129    8716 out.go:177] * docker "offline-docker-610000" container is missing, will recreate.
	I0703 17:19:50.519912    8716 delete.go:124] DEMOLISHING offline-docker-610000 ...
	I0703 17:19:50.520091    8716 cli_runner.go:164] Run: docker container inspect offline-docker-610000 --format={{.State.Status}}
	W0703 17:19:50.540267    8716 cli_runner.go:211] docker container inspect offline-docker-610000 --format={{.State.Status}} returned with exit code 1
	W0703 17:19:50.540330    8716 stop.go:83] unable to get state: unknown state "offline-docker-610000": docker container inspect offline-docker-610000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:50.540348    8716 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-610000": docker container inspect offline-docker-610000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:50.540737    8716 cli_runner.go:164] Run: docker container inspect offline-docker-610000 --format={{.State.Status}}
	W0703 17:19:50.559986    8716 cli_runner.go:211] docker container inspect offline-docker-610000 --format={{.State.Status}} returned with exit code 1
	I0703 17:19:50.560040    8716 delete.go:82] Unable to get host status for offline-docker-610000, assuming it has already been deleted: state: unknown state "offline-docker-610000": docker container inspect offline-docker-610000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:50.560123    8716 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-610000
	W0703 17:19:50.579496    8716 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-610000 returned with exit code 1
	I0703 17:19:50.579532    8716 kic.go:371] could not find the container offline-docker-610000 to remove it. will try anyways
	I0703 17:19:50.579603    8716 cli_runner.go:164] Run: docker container inspect offline-docker-610000 --format={{.State.Status}}
	W0703 17:19:50.598965    8716 cli_runner.go:211] docker container inspect offline-docker-610000 --format={{.State.Status}} returned with exit code 1
	W0703 17:19:50.599022    8716 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-610000": docker container inspect offline-docker-610000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:50.599118    8716 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-610000 /bin/bash -c "sudo init 0"
	W0703 17:19:50.618551    8716 cli_runner.go:211] docker exec --privileged -t offline-docker-610000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0703 17:19:50.618585    8716 oci.go:650] error shutdown offline-docker-610000: docker exec --privileged -t offline-docker-610000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:51.618974    8716 cli_runner.go:164] Run: docker container inspect offline-docker-610000 --format={{.State.Status}}
	W0703 17:19:51.647233    8716 cli_runner.go:211] docker container inspect offline-docker-610000 --format={{.State.Status}} returned with exit code 1
	I0703 17:19:51.647313    8716 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-610000": docker container inspect offline-docker-610000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:51.647327    8716 oci.go:664] temporary error: container offline-docker-610000 status is  but expect it to be exited
	I0703 17:19:51.647358    8716 retry.go:31] will retry after 369.031924ms: couldn't verify container is exited. %v: unknown state "offline-docker-610000": docker container inspect offline-docker-610000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:52.018734    8716 cli_runner.go:164] Run: docker container inspect offline-docker-610000 --format={{.State.Status}}
	W0703 17:19:52.039711    8716 cli_runner.go:211] docker container inspect offline-docker-610000 --format={{.State.Status}} returned with exit code 1
	I0703 17:19:52.039772    8716 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-610000": docker container inspect offline-docker-610000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:52.039782    8716 oci.go:664] temporary error: container offline-docker-610000 status is  but expect it to be exited
	I0703 17:19:52.039804    8716 retry.go:31] will retry after 735.075058ms: couldn't verify container is exited. %v: unknown state "offline-docker-610000": docker container inspect offline-docker-610000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:52.775348    8716 cli_runner.go:164] Run: docker container inspect offline-docker-610000 --format={{.State.Status}}
	W0703 17:19:52.796073    8716 cli_runner.go:211] docker container inspect offline-docker-610000 --format={{.State.Status}} returned with exit code 1
	I0703 17:19:52.796137    8716 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-610000": docker container inspect offline-docker-610000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:52.796153    8716 oci.go:664] temporary error: container offline-docker-610000 status is  but expect it to be exited
	I0703 17:19:52.796176    8716 retry.go:31] will retry after 1.254589265s: couldn't verify container is exited. %v: unknown state "offline-docker-610000": docker container inspect offline-docker-610000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:54.052771    8716 cli_runner.go:164] Run: docker container inspect offline-docker-610000 --format={{.State.Status}}
	W0703 17:19:54.075186    8716 cli_runner.go:211] docker container inspect offline-docker-610000 --format={{.State.Status}} returned with exit code 1
	I0703 17:19:54.075251    8716 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-610000": docker container inspect offline-docker-610000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:54.075276    8716 oci.go:664] temporary error: container offline-docker-610000 status is  but expect it to be exited
	I0703 17:19:54.075304    8716 retry.go:31] will retry after 1.119510105s: couldn't verify container is exited. %v: unknown state "offline-docker-610000": docker container inspect offline-docker-610000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:55.196269    8716 cli_runner.go:164] Run: docker container inspect offline-docker-610000 --format={{.State.Status}}
	W0703 17:19:55.217835    8716 cli_runner.go:211] docker container inspect offline-docker-610000 --format={{.State.Status}} returned with exit code 1
	I0703 17:19:55.217893    8716 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-610000": docker container inspect offline-docker-610000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:55.217902    8716 oci.go:664] temporary error: container offline-docker-610000 status is  but expect it to be exited
	I0703 17:19:55.217929    8716 retry.go:31] will retry after 2.232855537s: couldn't verify container is exited. %v: unknown state "offline-docker-610000": docker container inspect offline-docker-610000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:57.453163    8716 cli_runner.go:164] Run: docker container inspect offline-docker-610000 --format={{.State.Status}}
	W0703 17:19:57.474457    8716 cli_runner.go:211] docker container inspect offline-docker-610000 --format={{.State.Status}} returned with exit code 1
	I0703 17:19:57.474502    8716 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-610000": docker container inspect offline-docker-610000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:19:57.474511    8716 oci.go:664] temporary error: container offline-docker-610000 status is  but expect it to be exited
	I0703 17:19:57.474538    8716 retry.go:31] will retry after 3.862324983s: couldn't verify container is exited. %v: unknown state "offline-docker-610000": docker container inspect offline-docker-610000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:20:01.339123    8716 cli_runner.go:164] Run: docker container inspect offline-docker-610000 --format={{.State.Status}}
	W0703 17:20:01.361417    8716 cli_runner.go:211] docker container inspect offline-docker-610000 --format={{.State.Status}} returned with exit code 1
	I0703 17:20:01.361466    8716 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-610000": docker container inspect offline-docker-610000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:20:01.361480    8716 oci.go:664] temporary error: container offline-docker-610000 status is  but expect it to be exited
	I0703 17:20:01.361514    8716 retry.go:31] will retry after 4.94631171s: couldn't verify container is exited. %v: unknown state "offline-docker-610000": docker container inspect offline-docker-610000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:20:06.308204    8716 cli_runner.go:164] Run: docker container inspect offline-docker-610000 --format={{.State.Status}}
	W0703 17:20:06.330463    8716 cli_runner.go:211] docker container inspect offline-docker-610000 --format={{.State.Status}} returned with exit code 1
	I0703 17:20:06.330517    8716 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-610000": docker container inspect offline-docker-610000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:20:06.330526    8716 oci.go:664] temporary error: container offline-docker-610000 status is  but expect it to be exited
	I0703 17:20:06.330557    8716 oci.go:88] couldn't shut down offline-docker-610000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-610000": docker container inspect offline-docker-610000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	 
	I0703 17:20:06.330627    8716 cli_runner.go:164] Run: docker rm -f -v offline-docker-610000
	I0703 17:20:06.351202    8716 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-610000
	W0703 17:20:06.370005    8716 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-610000 returned with exit code 1
	I0703 17:20:06.370134    8716 cli_runner.go:164] Run: docker network inspect offline-docker-610000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0703 17:20:06.390439    8716 cli_runner.go:164] Run: docker network rm offline-docker-610000
	I0703 17:20:06.471432    8716 fix.go:124] Sleeping 1 second for extra luck!
	I0703 17:20:07.472098    8716 start.go:125] createHost starting for "" (driver="docker")
	I0703 17:20:07.493990    8716 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0703 17:20:07.494164    8716 start.go:159] libmachine.API.Create for "offline-docker-610000" (driver="docker")
	I0703 17:20:07.494195    8716 client.go:168] LocalClient.Create starting
	I0703 17:20:07.494459    8716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18998-1161/.minikube/certs/ca.pem
	I0703 17:20:07.494569    8716 main.go:141] libmachine: Decoding PEM data...
	I0703 17:20:07.494595    8716 main.go:141] libmachine: Parsing certificate...
	I0703 17:20:07.494676    8716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18998-1161/.minikube/certs/cert.pem
	I0703 17:20:07.494750    8716 main.go:141] libmachine: Decoding PEM data...
	I0703 17:20:07.494766    8716 main.go:141] libmachine: Parsing certificate...
	I0703 17:20:07.495802    8716 cli_runner.go:164] Run: docker network inspect offline-docker-610000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0703 17:20:07.516979    8716 cli_runner.go:211] docker network inspect offline-docker-610000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0703 17:20:07.517077    8716 network_create.go:284] running [docker network inspect offline-docker-610000] to gather additional debugging logs...
	I0703 17:20:07.517096    8716 cli_runner.go:164] Run: docker network inspect offline-docker-610000
	W0703 17:20:07.536569    8716 cli_runner.go:211] docker network inspect offline-docker-610000 returned with exit code 1
	I0703 17:20:07.536604    8716 network_create.go:287] error running [docker network inspect offline-docker-610000]: docker network inspect offline-docker-610000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-610000 not found
	I0703 17:20:07.536620    8716 network_create.go:289] output of [docker network inspect offline-docker-610000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-610000 not found
	
	** /stderr **
	I0703 17:20:07.536766    8716 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0703 17:20:07.557882    8716 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 17:20:07.559729    8716 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 17:20:07.561235    8716 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 17:20:07.561710    8716 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0004c3e00}
	I0703 17:20:07.561730    8716 network_create.go:124] attempt to create docker network offline-docker-610000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0703 17:20:07.561846    8716 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-610000 offline-docker-610000
	W0703 17:20:07.581508    8716 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-610000 offline-docker-610000 returned with exit code 1
	W0703 17:20:07.581549    8716 network_create.go:149] failed to create docker network offline-docker-610000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-610000 offline-docker-610000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0703 17:20:07.581570    8716 network_create.go:116] failed to create docker network offline-docker-610000 192.168.76.0/24, will retry: subnet is taken
	I0703 17:20:07.582938    8716 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 17:20:07.583330    8716 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0009104e0}
	I0703 17:20:07.583342    8716 network_create.go:124] attempt to create docker network offline-docker-610000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0703 17:20:07.583418    8716 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-610000 offline-docker-610000
	W0703 17:20:07.603258    8716 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-610000 offline-docker-610000 returned with exit code 1
	W0703 17:20:07.603297    8716 network_create.go:149] failed to create docker network offline-docker-610000 192.168.85.0/24 with gateway 192.168.85.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-610000 offline-docker-610000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0703 17:20:07.603316    8716 network_create.go:116] failed to create docker network offline-docker-610000 192.168.85.0/24, will retry: subnet is taken
	I0703 17:20:07.604939    8716 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 17:20:07.605386    8716 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00047e200}
	I0703 17:20:07.605401    8716 network_create.go:124] attempt to create docker network offline-docker-610000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0703 17:20:07.605472    8716 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-610000 offline-docker-610000
	I0703 17:20:07.661075    8716 network_create.go:108] docker network offline-docker-610000 192.168.94.0/24 created
	I0703 17:20:07.661110    8716 kic.go:121] calculated static IP "192.168.94.2" for the "offline-docker-610000" container
	I0703 17:20:07.661243    8716 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0703 17:20:07.682674    8716 cli_runner.go:164] Run: docker volume create offline-docker-610000 --label name.minikube.sigs.k8s.io=offline-docker-610000 --label created_by.minikube.sigs.k8s.io=true
	I0703 17:20:07.701776    8716 oci.go:103] Successfully created a docker volume offline-docker-610000
	I0703 17:20:07.701899    8716 cli_runner.go:164] Run: docker run --rm --name offline-docker-610000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-610000 --entrypoint /usr/bin/test -v offline-docker-610000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 -d /var/lib
	I0703 17:20:07.954941    8716 oci.go:107] Successfully prepared a docker volume offline-docker-610000
	I0703 17:20:07.954989    8716 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0703 17:20:07.955003    8716 kic.go:194] Starting extracting preloaded images to volume ...
	I0703 17:20:07.955139    8716 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-610000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0703 17:26:07.524301    8716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 17:26:07.524428    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:26:07.546716    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	I0703 17:26:07.546829    8716 retry.go:31] will retry after 199.761458ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:26:07.749051    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:26:07.771242    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	I0703 17:26:07.771366    8716 retry.go:31] will retry after 268.368159ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:26:08.041065    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:26:08.062773    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	I0703 17:26:08.062879    8716 retry.go:31] will retry after 705.670766ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:26:08.769542    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:26:08.791424    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	W0703 17:26:08.791557    8716 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	
	W0703 17:26:08.791578    8716 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:26:08.791639    8716 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0703 17:26:08.791689    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:26:08.812458    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	I0703 17:26:08.812557    8716 retry.go:31] will retry after 364.413146ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:26:09.178499    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:26:09.198493    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	I0703 17:26:09.198598    8716 retry.go:31] will retry after 250.686797ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:26:09.451737    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:26:09.474816    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	I0703 17:26:09.474912    8716 retry.go:31] will retry after 707.248474ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:26:10.184564    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:26:10.206833    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	I0703 17:26:10.206944    8716 retry.go:31] will retry after 482.898153ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:26:10.692259    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:26:10.713939    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	W0703 17:26:10.714052    8716 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	
	W0703 17:26:10.714075    8716 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:26:10.714082    8716 start.go:128] duration metric: took 6m3.214162639s to createHost
	I0703 17:26:10.714149    8716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 17:26:10.714206    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:26:10.734221    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	I0703 17:26:10.734315    8716 retry.go:31] will retry after 366.949146ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:26:11.102520    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:26:11.123807    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	I0703 17:26:11.123902    8716 retry.go:31] will retry after 220.60052ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:26:11.345104    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:26:11.365978    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	I0703 17:26:11.366076    8716 retry.go:31] will retry after 465.739286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:26:11.834240    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:26:11.856250    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	I0703 17:26:11.856365    8716 retry.go:31] will retry after 448.908337ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:26:12.306427    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:26:12.328066    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	W0703 17:26:12.328169    8716 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	
	W0703 17:26:12.328191    8716 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:26:12.328248    8716 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0703 17:26:12.328301    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:26:12.348019    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	I0703 17:26:12.348122    8716 retry.go:31] will retry after 209.299893ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:26:12.559786    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:26:12.581735    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	I0703 17:26:12.581843    8716 retry.go:31] will retry after 511.871932ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:26:13.094649    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:26:13.116330    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	I0703 17:26:13.116458    8716 retry.go:31] will retry after 369.688425ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:26:13.486412    8716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000
	W0703 17:26:13.507366    8716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000 returned with exit code 1
	W0703 17:26:13.507465    8716 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	
	W0703 17:26:13.507481    8716 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-610000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-610000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000
	I0703 17:26:13.507499    8716 fix.go:56] duration metric: took 6m23.049165836s for fixHost
	I0703 17:26:13.507505    8716 start.go:83] releasing machines lock for "offline-docker-610000", held for 6m23.049212782s
	W0703 17:26:13.507582    8716 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-610000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-610000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0703 17:26:13.551191    8716 out.go:177] 
	W0703 17:26:13.573243    8716 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0703 17:26:13.573296    8716 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0703 17:26:13.573321    8716 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0703 17:26:13.595000    8716 out.go:177] 

                                                
                                                
** /stderr **
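The stderr log above captures minikube's free-subnet search end to end: network_create.go first skips the /24 blocks already reserved by existing networks (192.168.49.0, .58, .67), then walks further candidates whose third octet grows by 9 (76, then 85, then 94), retrying whenever the daemon answers "Pool overlaps with other one on this address space", until 192.168.94.0/24 finally succeeds. Below is a minimal Go sketch of that walk; the starting octet and the +9 step are inferred from this log rather than taken from minikube's source, and createNetwork is an illustrative stand-in for the cli_runner call.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// createNetwork tries `docker network create` for one candidate subnet and
	// reports whether the daemon rejected it because the address pool overlaps.
	// The flags mirror the command visible in the log above.
	func createNetwork(name, subnet, gateway string) (taken bool, err error) {
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet, "--gateway="+gateway,
			"-o", "com.docker.network.driver.mtu=65535",
			name).CombinedOutput()
		if err != nil && strings.Contains(string(out), "Pool overlaps") {
			return true, nil // subnet is taken; caller should try the next candidate
		}
		return false, err
	}

	func main() {
		// Walk 192.168.x.0/24 candidates; the +9 step matches the sequence
		// 49, 58, 67, 76, 85, 94 seen in the log (an inference, not a
		// documented algorithm).
		for octet := 49; octet <= 247; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			gateway := fmt.Sprintf("192.168.%d.1", octet)
			taken, err := createNetwork("offline-docker-610000", subnet, gateway)
			if err != nil {
				fmt.Println("create failed:", err)
				return
			}
			if !taken {
				fmt.Println("created network on", subnet)
				return
			}
		}
		fmt.Println("no free subnet found")
	}

Note that the network created here outlived the failed start: the post-mortem docker inspect below still finds it, with an empty Containers map.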
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-610000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
panic.go:626: *** TestOffline FAILED at 2024-07-03 17:26:13.649581 -0700 PDT m=+6009.464434908
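Every "will retry after …ms" line in the log comes from minikube's retry helper (retry.go:31), here wrapping the df -h /var and docker container inspect probes: each attempt fails with "No such container", the helper sleeps a short randomized interval (199ms, 268ms, 705ms, …) and tries again until the surrounding createHost/fixHost budget lapses. A self-contained sketch of that retry-until-deadline shape, with the jitter bounds and function names chosen for illustration rather than copied from minikube:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntil keeps calling fn with a short randomized pause between
	// attempts, the pattern behind the "will retry after 199.761458ms" lines.
	func retryUntil(deadline time.Duration, fn func() error) error {
		start := time.Now()
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("gave up after %s: %w", deadline, err)
			}
			wait := time.Duration(200+rand.Intn(600)) * time.Millisecond
			fmt.Printf("will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
		}
	}

	func main() {
		err := retryUntil(2*time.Second, func() error {
			// Stand-in for `docker container inspect -f ...`, which kept
			// failing above because the container never existed.
			return errors.New("No such container: offline-docker-610000")
		})
		fmt.Println(err)
	}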
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-610000
helpers_test.go:235: (dbg) docker inspect offline-docker-610000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "offline-docker-610000",
	        "Id": "2cfa244c36c63d5d04d4d35e2cd47d92dac53c5ce566ce953814da3912a51f54",
	        "Created": "2024-07-04T00:20:07.623055626Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-610000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
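The inspect dump above is the telling artifact: the offline-docker-610000 network was created on 192.168.94.0/24 exactly as the log promised, but its "Containers" map is empty, so the node container never attached. A hedged Go sketch that decodes this JSON and flags minikube-labelled networks with no attached containers; the struct mirrors only the fields visible in the dump:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// network mirrors just the fields of `docker network inspect` output
	// that appear in the -- stdout -- block above.
	type network struct {
		Name       string                     `json:"Name"`
		Labels     map[string]string          `json:"Labels"`
		Containers map[string]json.RawMessage `json:"Containers"`
	}

	func main() {
		// `docker network inspect` prints a JSON array, as shown above.
		out, err := exec.Command("docker", "network", "inspect", "offline-docker-610000").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		var nets []network
		if err := json.Unmarshal(out, &nets); err != nil {
			fmt.Println("bad JSON:", err)
			return
		}
		for _, n := range nets {
			if n.Labels["created_by.minikube.sigs.k8s.io"] == "true" && len(n.Containers) == 0 {
				// An orphaned minikube network: created, but nothing joined it.
				fmt.Printf("network %q has no containers; `docker network rm %s` would clean it up\n", n.Name, n.Name)
			}
		}
	}

Removing such an orphan is effectively what the suggested `minikube delete -p offline-docker-610000` does as part of profile cleanup.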
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-610000 -n offline-docker-610000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-610000 -n offline-docker-610000: exit status 7 (76.130915ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0703 17:26:13.749251    9520 status.go:131] status error: host: state: unknown state "offline-docker-610000": docker container inspect offline-docker-610000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-610000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-610000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-610000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-610000
--- FAIL: TestOffline (753.28s)
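The exit-status-7 path above also shows how the status probe works: status.go runs docker container inspect --format {{.State.Status}} and, when the daemon answers "No such container", reports the host as Nonexistent. A small sketch of that mapping; classifying on the daemon's error text is an assumption that is good enough for a diagnostic probe, not a contract:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostState reproduces the probe behind the status check: ask the Docker
	// daemon for the container's state, and map a missing container to
	// "Nonexistent" (the value printed in the -- stdout -- block above).
	func hostState(name string) string {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format", "{{.State.Status}}").CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "No such container") {
				return "Nonexistent"
			}
			return "Error"
		}
		return strings.TrimSpace(string(out)) // e.g. "running", "exited"
	}

	func main() {
		fmt.Println(hostState("offline-docker-610000"))
	}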

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (875.66s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-052000 ssh -- ls /minikube-host
E0703 16:11:00.558211    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 16:12:05.660577    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 16:15:42.617524    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 16:16:00.562523    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 16:17:23.613676    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 16:20:42.620776    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 16:21:00.566887    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-2-052000 ssh -- ls /minikube-host: signal: killed (14m35.374633511s)
mount_start_test.go:116: mount failed: "out/minikube-darwin-amd64 -p mount-start-2-052000 ssh -- ls /minikube-host" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountPostStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-052000
helpers_test.go:235: (dbg) docker inspect mount-start-2-052000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "17b68bda1c55e31e5e2b424bddcfc99d4a48de9131454037931f11aa9914efee",
	        "Created": "2024-07-03T23:10:41.678575309Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 129019,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-03T23:10:51.299492553Z",
	            "FinishedAt": "2024-07-03T23:10:49.180183261Z"
	        },
	        "Image": "sha256:a0ca00328aec4685dbd057efbd4f0cb88bdb5a7796daaecd4061e2ae920b8c25",
	        "ResolvConfPath": "/var/lib/docker/containers/17b68bda1c55e31e5e2b424bddcfc99d4a48de9131454037931f11aa9914efee/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/17b68bda1c55e31e5e2b424bddcfc99d4a48de9131454037931f11aa9914efee/hostname",
	        "HostsPath": "/var/lib/docker/containers/17b68bda1c55e31e5e2b424bddcfc99d4a48de9131454037931f11aa9914efee/hosts",
	        "LogPath": "/var/lib/docker/containers/17b68bda1c55e31e5e2b424bddcfc99d4a48de9131454037931f11aa9914efee/17b68bda1c55e31e5e2b424bddcfc99d4a48de9131454037931f11aa9914efee-json.log",
	        "Name": "/mount-start-2-052000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "mount-start-2-052000:/var",
	                "/host_mnt/Users:/minikube-host",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "mount-start-2-052000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/45e8c71ac546194497f80d0ac5dcac6eb19a9dfbf0a93cfeffc8818577d291fe-init/diff:/var/lib/docker/overlay2/56a732c29d2e2acdbf9027be55cf07c4988a7be58120f0ef4199e49e3bf1472b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/45e8c71ac546194497f80d0ac5dcac6eb19a9dfbf0a93cfeffc8818577d291fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/45e8c71ac546194497f80d0ac5dcac6eb19a9dfbf0a93cfeffc8818577d291fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/45e8c71ac546194497f80d0ac5dcac6eb19a9dfbf0a93cfeffc8818577d291fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "mount-start-2-052000",
	                "Source": "/var/lib/docker/volumes/mount-start-2-052000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/host_mnt/Users",
	                "Destination": "/minikube-host",
	                "Mode": "",
	                "RW": true,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "mount-start-2-052000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "mount-start-2-052000",
	                "name.minikube.sigs.k8s.io": "mount-start-2-052000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "33f030c02f4d1acd97acf228a93ae023222cc485caa91c08a42c9cb7f185670a",
	            "SandboxKey": "/var/run/docker/netns/33f030c02f4d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51644"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51645"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51646"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51647"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51648"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "mount-start-2-052000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "40a8623d08e2ba9e737491d4ff6235a6445f8d5db933badf2e274ff5e6361771",
	                    "EndpointID": "622384f4d399bccf12c58f19fa29435db0ae44affbf9e9e58ffd70c2ff52886a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "mount-start-2-052000",
	                        "17b68bda1c55"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
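The inspect output above shows the bind mount is in place (/host_mnt/Users mounted read-write at /minikube-host, propagation rprivate), yet ls /minikube-host over SSH hung for 14m35s until the Go test deadline killed it, which points at a wedged host-file-sharing channel in Docker Desktop rather than a missing mount. One way to make such a check fail in seconds instead of pinning the whole suite is to bound the probe with a context deadline, sketched below; the 30-second budget is an illustrative choice:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// verifyMount runs `ls` on the mount point inside the node, but gives up
	// after a deadline instead of hanging for the harness-level timeout seen
	// above.
	func verifyMount(profile, mountPoint string, budget time.Duration) error {
		ctx, cancel := context.WithTimeout(context.Background(), budget)
		defer cancel()
		cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64",
			"-p", profile, "ssh", "--", "ls", mountPoint)
		out, err := cmd.CombinedOutput()
		if ctx.Err() == context.DeadlineExceeded {
			return fmt.Errorf("mount check timed out after %s (mount likely wedged)", budget)
		}
		if err != nil {
			return fmt.Errorf("ls %s failed: %v\n%s", mountPoint, err, out)
		}
		return nil
	}

	func main() {
		fmt.Println(verifyMount("mount-start-2-052000", "/minikube-host", 30*time.Second))
	}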
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-052000 -n mount-start-2-052000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-052000 -n mount-start-2-052000: exit status 6 (255.716661ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0703 16:25:33.757051    6602 status.go:451] kubeconfig endpoint: get endpoint: "mount-start-2-052000" does not appear in /Users/jenkins/minikube-integration/18998-1161/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-052000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountPostStop (875.66s)
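The exit-status-6 path in this post-mortem is driven by the kubeconfig check at status.go:451: the container reports Running, but no mount-start-2-052000 entry exists in the kubeconfig, so status degrades to the stale-context warning, which is consistent with the harness noting the exit code "may be ok". A sketch of that lookup using client-go's kubeconfig loader (assumes k8s.io/client-go is available in go.mod):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		path := "/Users/jenkins/minikube-integration/18998-1161/kubeconfig"
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		// Mirrors the status.go:451 complaint: the profile must appear as a
		// cluster entry before its endpoint can be checked.
		cluster, ok := cfg.Clusters["mount-start-2-052000"]
		if !ok {
			fmt.Printf("%q does not appear in %s\n", "mount-start-2-052000", path)
			return
		}
		fmt.Println("endpoint:", cluster.Server)
	}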

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (756.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-966000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0703 16:28:45.664699    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 16:30:42.617676    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 16:31:00.561159    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 16:34:03.615194    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 16:35:42.618242    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 16:36:00.561842    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-966000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m35.922929297s)

                                                
                                                
-- stdout --
	* [multinode-966000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=18998
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18998-1161/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18998-1161/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "multinode-966000" primary control-plane node in "multinode-966000" cluster
	* Pulling base image v0.0.44-1719972989-19184 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-966000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0703 16:26:41.587964    6685 out.go:291] Setting OutFile to fd 1 ...
	I0703 16:26:41.588240    6685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 16:26:41.588245    6685 out.go:304] Setting ErrFile to fd 2...
	I0703 16:26:41.588249    6685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 16:26:41.588421    6685 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18998-1161/.minikube/bin
	I0703 16:26:41.589857    6685 out.go:298] Setting JSON to false
	I0703 16:26:41.612521    6685 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":3370,"bootTime":1720045831,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0703 16:26:41.612618    6685 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0703 16:26:41.634128    6685 out.go:177] * [multinode-966000] minikube v1.33.1 on Darwin 14.5
	I0703 16:26:41.676093    6685 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 16:26:41.676137    6685 notify.go:220] Checking for updates...
	I0703 16:26:41.718778    6685 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18998-1161/kubeconfig
	I0703 16:26:41.739809    6685 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0703 16:26:41.760926    6685 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 16:26:41.782067    6685 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18998-1161/.minikube
	I0703 16:26:41.803011    6685 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 16:26:41.824415    6685 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 16:26:41.850912    6685 docker.go:122] docker version: linux-26.1.4:Docker Desktop 4.31.0 (153195)
	I0703 16:26:41.851094    6685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 16:26:41.930377    6685 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:false NGoroutines:73 SystemTime:2024-07-03 23:26:41.920592382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.6.31-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:33654255616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.24] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.2.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.9.3]] Warnings:<nil>}}
	I0703 16:26:41.972466    6685 out.go:177] * Using the docker driver based on user configuration
	I0703 16:26:41.993452    6685 start.go:297] selected driver: docker
	I0703 16:26:41.993481    6685 start.go:901] validating driver "docker" against <nil>
	I0703 16:26:41.993496    6685 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 16:26:41.997848    6685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 16:26:42.078536    6685 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:false NGoroutines:73 SystemTime:2024-07-03 23:26:42.068821199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.6.31-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:33654255616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.24] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.2.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.9.3]] Warnings:<nil>}}
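
The docker info dump above is what minikube sees from the daemon before provisioning. A minimal, self-contained Go sketch (not minikube's actual code) that recovers a few of the same fields via `docker info --format '{{json .}}'`:

    // dockerinfo_sketch.go: decodes a handful of the fields shown in the
    // info.go:266 line; the daemon emits many more, which encoding/json ignores.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    type dockerInfo struct {
        NCPU            int
        MemTotal        int64
        ServerVersion   string
        OperatingSystem string
    }

    func main() {
        out, err := exec.Command("docker", "info", "--format", "{{json .}}").Output()
        if err != nil {
            log.Fatalf("docker info: %v", err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            log.Fatalf("decode: %v", err)
        }
        fmt.Printf("server %s on %s: %d CPUs, %d bytes RAM\n",
            info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
    }
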
	I0703 16:26:42.078731    6685 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0703 16:26:42.078913    6685 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 16:26:42.100519    6685 out.go:177] * Using Docker Desktop driver with root privileges
	I0703 16:26:42.121403    6685 cni.go:84] Creating CNI manager for ""
	I0703 16:26:42.121433    6685 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0703 16:26:42.121447    6685 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0703 16:26:42.121543    6685 start.go:340] cluster config:
	{Name:multinode-966000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-966000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
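
The cluster config above is what gets persisted as JSON to the profile's config.json a few lines below. A hedged sketch that round-trips a small, hand-picked subset of those fields; the real minikube struct has many more:

    // clusterconfig_sketch.go: illustrative subset only.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    type clusterConfig struct {
        Name             string
        Driver           string
        Memory           int
        CPUs             int
        KubernetesConfig struct {
            KubernetesVersion string
            ClusterName       string
        }
    }

    func main() {
        raw := []byte(`{"Name":"multinode-966000","Driver":"docker","Memory":2200,"CPUs":2,
            "KubernetesConfig":{"KubernetesVersion":"v1.30.2","ClusterName":"multinode-966000"}}`)
        var cc clusterConfig
        if err := json.Unmarshal(raw, &cc); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s: driver=%s mem=%dMB cpus=%d k8s=%s\n",
            cc.Name, cc.Driver, cc.Memory, cc.CPUs, cc.KubernetesConfig.KubernetesVersion)
    }
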
	I0703 16:26:42.165150    6685 out.go:177] * Starting "multinode-966000" primary control-plane node in "multinode-966000" cluster
	I0703 16:26:42.186283    6685 cache.go:121] Beginning downloading kic base image for docker with docker
	I0703 16:26:42.207366    6685 out.go:177] * Pulling base image v0.0.44-1719972989-19184 ...
	I0703 16:26:42.228516    6685 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0703 16:26:42.228624    6685 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 in local docker daemon
	I0703 16:26:42.228621    6685 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0703 16:26:42.228652    6685 cache.go:56] Caching tarball of preloaded images
	I0703 16:26:42.228938    6685 preload.go:173] Found /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0703 16:26:42.228958    6685 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
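
The preload check above short-circuits the download when the tarball is already cached. A sketch of that decision; the path layout and the "v18" preload schema are read off the log, and the helper name is ours:

    // preloadcheck_sketch.go
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func preloadPath(minikubeHome, k8sVersion, runtime string) string {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4",
            k8sVersion, runtime)
        return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
    }

    func main() {
        p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.30.2", "docker")
        if _, err := os.Stat(p); err == nil {
            fmt.Println("Found local preload, skipping download:", p)
        } else {
            fmt.Println("No local preload, would download:", p)
        }
    }
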
	I0703 16:26:42.230577    6685 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/multinode-966000/config.json ...
	I0703 16:26:42.230659    6685 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/multinode-966000/config.json: {Name:mk939c70f9080c4a9efc04f220f00540c41d4b61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
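
The WriteFile line above acquires a lock with Delay:500ms and Timeout:1m0s before touching config.json. A rough sketch of a poll-until-timeout locked write; the lock-file mechanism here is our stand-in, not minikube's actual locker, but the {Delay, Timeout} shape is the same:

    // lockedwrite_sketch.go
    package main

    import (
        "errors"
        "os"
        "time"
    )

    func writeFileLocked(path string, data []byte, delay, timeout time.Duration) error {
        lock := path + ".lock"
        deadline := time.Now().Add(timeout)
        for {
            // O_EXCL makes creation fail while another writer holds the lock.
            f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                defer os.Remove(lock)
                return os.WriteFile(path, data, 0o644)
            }
            if time.Now().After(deadline) {
                return errors.New("timed out acquiring " + lock)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        if err := writeFileLocked("/tmp/config.json", []byte("{}\n"),
            500*time.Millisecond, time.Minute); err != nil {
            panic(err)
        }
    }
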
	I0703 16:26:42.249656    6685 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 in local docker daemon, skipping pull
	I0703 16:26:42.249674    6685 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 exists in daemon, skipping load
	I0703 16:26:42.249697    6685 cache.go:194] Successfully downloaded all kic artifacts
	I0703 16:26:42.249747    6685 start.go:360] acquireMachinesLock for multinode-966000: {Name:mk9a872cb80fb41099765e3cf2904deb4ec994cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 16:26:42.249915    6685 start.go:364] duration metric: took 155.949µs to acquireMachinesLock for "multinode-966000"
	I0703 16:26:42.249942    6685 start.go:93] Provisioning new machine with config: &{Name:multinode-966000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-966000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0703 16:26:42.250233    6685 start.go:125] createHost starting for "" (driver="docker")
	I0703 16:26:42.292135    6685 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0703 16:26:42.292401    6685 start.go:159] libmachine.API.Create for "multinode-966000" (driver="docker")
	I0703 16:26:42.292436    6685 client.go:168] LocalClient.Create starting
	I0703 16:26:42.292611    6685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18998-1161/.minikube/certs/ca.pem
	I0703 16:26:42.292686    6685 main.go:141] libmachine: Decoding PEM data...
	I0703 16:26:42.292703    6685 main.go:141] libmachine: Parsing certificate...
	I0703 16:26:42.292766    6685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18998-1161/.minikube/certs/cert.pem
	I0703 16:26:42.292806    6685 main.go:141] libmachine: Decoding PEM data...
	I0703 16:26:42.292813    6685 main.go:141] libmachine: Parsing certificate...
	I0703 16:26:42.293274    6685 cli_runner.go:164] Run: docker network inspect multinode-966000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0703 16:26:42.312643    6685 cli_runner.go:211] docker network inspect multinode-966000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0703 16:26:42.312749    6685 network_create.go:284] running [docker network inspect multinode-966000] to gather additional debugging logs...
	I0703 16:26:42.312766    6685 cli_runner.go:164] Run: docker network inspect multinode-966000
	W0703 16:26:42.332245    6685 cli_runner.go:211] docker network inspect multinode-966000 returned with exit code 1
	I0703 16:26:42.332272    6685 network_create.go:287] error running [docker network inspect multinode-966000]: docker network inspect multinode-966000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-966000 not found
	I0703 16:26:42.332293    6685 network_create.go:289] output of [docker network inspect multinode-966000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-966000 not found
	
	** /stderr **
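
The two inspect runs above show a useful pattern: probe the network with a --format Go template, and when that fails, rerun the bare command purely to capture the daemon's error text ("gather additional debugging logs"). A simplified sketch, with a much shorter template than the one in the log:

    // netprobe_sketch.go
    package main

    import (
        "fmt"
        "os/exec"
    )

    func inspectNetwork(name string) (string, error) {
        const format = `{"Name":"{{.Name}}","Subnet":"{{range .IPAM.Config}}{{.Subnet}}{{end}}"}`
        out, err := exec.Command("docker", "network", "inspect", name, "--format", format).Output()
        if err != nil {
            // Rerun without --format just to collect the daemon's message.
            raw, _ := exec.Command("docker", "network", "inspect", name).CombinedOutput()
            return "", fmt.Errorf("inspect %s: %v\n%s", name, err, raw)
        }
        return string(out), nil
    }

    func main() {
        if out, err := inspectNetwork("multinode-966000"); err != nil {
            fmt.Println(err) // e.g. "network multinode-966000 not found"
        } else {
            fmt.Println(out)
        }
    }
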
	I0703 16:26:42.332414    6685 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0703 16:26:42.354225    6685 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 16:26:42.355643    6685 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 16:26:42.356037    6685 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016125f0}
	I0703 16:26:42.356084    6685 network_create.go:124] attempt to create docker network multinode-966000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0703 16:26:42.356167    6685 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-966000 multinode-966000
	I0703 16:26:42.411060    6685 network_create.go:108] docker network multinode-966000 192.168.67.0/24 created
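
The network.go lines above walk candidate private /24 ranges: 192.168.49.0 and 192.168.58.0 are already reserved by existing bridges, so 192.168.67.0/24 wins. A sketch of that walk; the step of 9 in the third octet (49, 58, 67, 76, 85, ...) is inferred from the subnets appearing in this log:

    // subnetwalk_sketch.go
    package main

    import "fmt"

    func firstFreeSubnet(reserved map[string]bool) string {
        for octet := 49; octet <= 247; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if reserved[cidr] {
                fmt.Println("skipping subnet", cidr, "that is reserved")
                continue
            }
            return cidr
        }
        return "" // no candidate free
    }

    func main() {
        reserved := map[string]bool{
            "192.168.49.0/24": true,
            "192.168.58.0/24": true,
        }
        fmt.Println("using free private subnet", firstFreeSubnet(reserved)) // 192.168.67.0/24
    }
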
	I0703 16:26:42.411097    6685 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-966000" container
	I0703 16:26:42.411219    6685 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0703 16:26:42.431039    6685 cli_runner.go:164] Run: docker volume create multinode-966000 --label name.minikube.sigs.k8s.io=multinode-966000 --label created_by.minikube.sigs.k8s.io=true
	I0703 16:26:42.451756    6685 oci.go:103] Successfully created a docker volume multinode-966000
	I0703 16:26:42.451868    6685 cli_runner.go:164] Run: docker run --rm --name multinode-966000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-966000 --entrypoint /usr/bin/test -v multinode-966000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 -d /var/lib
	I0703 16:26:42.774429    6685 oci.go:107] Successfully prepared a docker volume multinode-966000
	I0703 16:26:42.774492    6685 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0703 16:26:42.774508    6685 kic.go:194] Starting extracting preloaded images to volume ...
	I0703 16:26:42.774600    6685 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-966000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 -I lz4 -xf /preloaded.tar -C /extractDir
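
The docker run above is the preload-extraction step: the lz4 tarball is mounted read-only into a throwaway container whose entrypoint is tar, which unpacks it into the named volume so the node container later starts with images preinstalled. The equivalent invocation from Go (the tarball path below is a placeholder):

    // preloadextract_sketch.go
    package main

    import (
        "log"
        "os/exec"
    )

    func extractPreload(tarball, volume, baseImage string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            baseImage,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        return cmd.Run()
    }

    func main() {
        err := extractPreload(
            "/path/to/preloaded-images.tar.lz4", // placeholder
            "multinode-966000",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184")
        if err != nil {
            log.Fatal(err)
        }
    }
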
	I0703 16:32:42.294195    6685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 16:32:42.294353    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:32:42.316060    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:32:42.316185    6685 retry.go:31] will retry after 354.506875ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
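
The retry block starting above looks up the host port mapped to 22/tcp so minikube can SSH into the node; because the container was never actually created, every attempt fails with "No such container". A sketch of the lookup with growing backoff (the attempt count and backoff constants are illustrative, not minikube's):

    // sshport_sketch.go
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func sshPort(container string) (string, error) {
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        delay := 300 * time.Millisecond
        var lastErr error
        for attempt := 0; attempt < 5; attempt++ {
            out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
            if err == nil {
                return strings.TrimSpace(string(out)), nil
            }
            lastErr = err
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay = delay * 3 / 2 // grow roughly like the intervals in the log
        }
        return "", fmt.Errorf("get port 22 for %q: %w", container, lastErr)
    }

    func main() {
        fmt.Println(sshPort("multinode-966000"))
    }
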
	I0703 16:32:42.671583    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:32:42.694165    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:32:42.694275    6685 retry.go:31] will retry after 367.492446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:32:43.064196    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:32:43.086487    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:32:43.086583    6685 retry.go:31] will retry after 665.089243ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:32:43.752189    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:32:43.773922    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	W0703 16:32:43.774040    6685 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	
	W0703 16:32:43.774059    6685 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
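
The failed probes above and below are minikube's two disk checks: percent of /var used (df -h) and GiB available (df -BG), each reduced with awk. Run locally here for illustration; minikube executes them over SSH inside the node container, which is why they fail when the container is missing:

    // diskprobe_sketch.go
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func probe(script string) (string, error) {
        out, err := exec.Command("sh", "-c", script).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        pctUsed, err := probe(`df -h /var | awk 'NR==2{print $5}'`)
        fmt.Println("percent of /var used:", pctUsed, err)
        gibFree, err := probe(`df -BG /var | awk 'NR==2{print $4}'`)
        fmt.Println("GiB of /var available:", gibFree, err)
    }
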
	I0703 16:32:43.774124    6685 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0703 16:32:43.774213    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:32:43.794516    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:32:43.794611    6685 retry.go:31] will retry after 268.833546ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:32:44.065051    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:32:44.087622    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:32:44.087711    6685 retry.go:31] will retry after 357.953414ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:32:44.447585    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:32:44.469516    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:32:44.469609    6685 retry.go:31] will retry after 456.164711ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:32:44.926814    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:32:44.949146    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:32:44.949256    6685 retry.go:31] will retry after 726.391006ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:32:45.678101    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:32:45.699945    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	W0703 16:32:45.700050    6685 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	
	W0703 16:32:45.700069    6685 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:32:45.700085    6685 start.go:128] duration metric: took 6m3.448461972s to createHost
	I0703 16:32:45.700091    6685 start.go:83] releasing machines lock for "multinode-966000", held for 6m3.448802761s
	W0703 16:32:45.700105    6685 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0703 16:32:45.700533    6685 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:32:45.719999    6685 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:32:45.720064    6685 delete.go:82] Unable to get host status for multinode-966000, assuming it has already been deleted: state: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	W0703 16:32:45.720151    6685 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0703 16:32:45.720158    6685 start.go:728] Will try again in 5 seconds ...
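
At this point host creation has raced a fixed 360-second deadline and lost, so the machines lock is released and a second attempt is scheduled 5 seconds later. A toy sketch of that timeout-then-retry shape using context.WithTimeout; the long sleep standing in for real provisioning work is arbitrary:

    // hosttimeout_sketch.go
    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    func createHost(ctx context.Context) error {
        select {
        case <-time.After(10 * time.Minute): // stand-in for work that never finishes in time
            return nil
        case <-ctx.Done():
            return errors.New("create host timed out in 360.000000 seconds")
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 360*time.Second)
        defer cancel()
        if err := createHost(ctx); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second)
            // a second attempt would re-acquire the machines lock and recreate the host
        }
    }
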
	I0703 16:32:50.722380    6685 start.go:360] acquireMachinesLock for multinode-966000: {Name:mk9a872cb80fb41099765e3cf2904deb4ec994cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 16:32:50.723205    6685 start.go:364] duration metric: took 773.161µs to acquireMachinesLock for "multinode-966000"
	I0703 16:32:50.723316    6685 start.go:96] Skipping create...Using existing machine configuration
	I0703 16:32:50.723338    6685 fix.go:54] fixHost starting: 
	I0703 16:32:50.723828    6685 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:32:50.745697    6685 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:32:50.745746    6685 fix.go:112] recreateIfNeeded on multinode-966000: state= err=unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:32:50.745763    6685 fix.go:117] machineExists: false. err=machine does not exist
	I0703 16:32:50.767398    6685 out.go:177] * docker "multinode-966000" container is missing, will recreate.
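
recreateIfNeeded above concludes "machineExists: false" because inspecting {{.State.Status}} fails with "No such container". A sketch of that existence test:

    // machineexists_sketch.go
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func machineExists(name string) (bool, error) {
        out, err := exec.Command("docker", "container", "inspect",
            name, "--format", "{{.State.Status}}").CombinedOutput()
        if err == nil {
            return true, nil
        }
        if strings.Contains(string(out), "No such container") {
            return false, nil // definitely gone: recreate from scratch
        }
        return false, fmt.Errorf("unknown state %q: %v", name, err)
    }

    func main() {
        exists, err := machineExists("multinode-966000")
        if err == nil && !exists {
            fmt.Println(`* docker "multinode-966000" container is missing, will recreate.`)
        }
    }
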
	I0703 16:32:50.809167    6685 delete.go:124] DEMOLISHING multinode-966000 ...
	I0703 16:32:50.809342    6685 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:32:50.830121    6685 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	W0703 16:32:50.830176    6685 stop.go:83] unable to get state: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:32:50.830196    6685 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:32:50.830579    6685 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:32:50.849836    6685 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:32:50.849884    6685 delete.go:82] Unable to get host status for multinode-966000, assuming it has already been deleted: state: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:32:50.849974    6685 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-966000
	W0703 16:32:50.868956    6685 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-966000 returned with exit code 1
	I0703 16:32:50.868994    6685 kic.go:371] could not find the container multinode-966000 to remove it. will try anyways
	I0703 16:32:50.869081    6685 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:32:50.888161    6685 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	W0703 16:32:50.888210    6685 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:32:50.888299    6685 cli_runner.go:164] Run: docker exec --privileged -t multinode-966000 /bin/bash -c "sudo init 0"
	W0703 16:32:50.907655    6685 cli_runner.go:211] docker exec --privileged -t multinode-966000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0703 16:32:50.907692    6685 oci.go:650] error shutdown multinode-966000: docker exec --privileged -t multinode-966000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:32:51.908984    6685 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:32:51.929233    6685 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:32:51.929278    6685 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:32:51.929289    6685 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:32:51.929310    6685 retry.go:31] will retry after 410.212718ms: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:32:52.341931    6685 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:32:52.363896    6685 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:32:52.363938    6685 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:32:52.363948    6685 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:32:52.363974    6685 retry.go:31] will retry after 934.65151ms: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:32:53.299793    6685 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:32:53.321020    6685 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:32:53.321085    6685 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:32:53.321097    6685 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:32:53.321121    6685 retry.go:31] will retry after 1.281197753s: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:32:54.604773    6685 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:32:54.627063    6685 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:32:54.627104    6685 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:32:54.627115    6685 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:32:54.627143    6685 retry.go:31] will retry after 1.988859695s: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:32:56.618335    6685 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:32:56.640533    6685 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:32:56.640576    6685 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:32:56.640591    6685 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:32:56.640614    6685 retry.go:31] will retry after 3.476633804s: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:33:00.118113    6685 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:33:00.139729    6685 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:33:00.139777    6685 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:33:00.139787    6685 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:33:00.139813    6685 retry.go:31] will retry after 2.097176209s: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:33:02.239455    6685 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:33:02.261493    6685 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:33:02.261544    6685 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:33:02.261554    6685 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:33:02.261581    6685 retry.go:31] will retry after 7.182264283s: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:33:09.444121    6685 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:33:09.465109    6685 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:33:09.465152    6685 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:33:09.465162    6685 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:33:09.465192    6685 oci.go:88] couldn't shut down multinode-966000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	 
	I0703 16:33:09.465281    6685 cli_runner.go:164] Run: docker rm -f -v multinode-966000
	I0703 16:33:09.486153    6685 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-966000
	W0703 16:33:09.506373    6685 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-966000 returned with exit code 1
	I0703 16:33:09.506488    6685 cli_runner.go:164] Run: docker network inspect multinode-966000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0703 16:33:09.526328    6685 cli_runner.go:164] Run: docker network rm multinode-966000
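
The teardown above tries a graceful sudo init 0 inside the node, verifies shutdown with escalating backoff, and, when the container never reports a state, falls back to force-removing the container, its volume, and its network. A condensed sketch; the attempt count and delays are illustrative:

    // demolish_sketch.go
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func demolish(name string) {
        // Best effort; in this log it fails with "No such container", which is fine.
        exec.Command("docker", "exec", "--privileged", "-t", name,
            "/bin/bash", "-c", "sudo init 0").Run()
        delay := 400 * time.Millisecond
        for i := 0; i < 8; i++ {
            out, err := exec.Command("docker", "container", "inspect",
                name, "--format", "{{.State.Status}}").Output()
            if err == nil && string(out) == "exited\n" {
                break
            }
            time.Sleep(delay)
            delay *= 2
        }
        // Force cleanup regardless of what verification concluded.
        exec.Command("docker", "rm", "-f", "-v", name).Run()
        exec.Command("docker", "network", "rm", name).Run()
        fmt.Println("demolished", name)
    }

    func main() { demolish("multinode-966000") }
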
	I0703 16:33:09.604796    6685 fix.go:124] Sleeping 1 second for extra luck!
	I0703 16:33:10.606957    6685 start.go:125] createHost starting for "" (driver="docker")
	I0703 16:33:10.630193    6685 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0703 16:33:10.630376    6685 start.go:159] libmachine.API.Create for "multinode-966000" (driver="docker")
	I0703 16:33:10.630403    6685 client.go:168] LocalClient.Create starting
	I0703 16:33:10.630626    6685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18998-1161/.minikube/certs/ca.pem
	I0703 16:33:10.630723    6685 main.go:141] libmachine: Decoding PEM data...
	I0703 16:33:10.630751    6685 main.go:141] libmachine: Parsing certificate...
	I0703 16:33:10.630836    6685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18998-1161/.minikube/certs/cert.pem
	I0703 16:33:10.630917    6685 main.go:141] libmachine: Decoding PEM data...
	I0703 16:33:10.630931    6685 main.go:141] libmachine: Parsing certificate...
	I0703 16:33:10.651897    6685 cli_runner.go:164] Run: docker network inspect multinode-966000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0703 16:33:10.673825    6685 cli_runner.go:211] docker network inspect multinode-966000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0703 16:33:10.673915    6685 network_create.go:284] running [docker network inspect multinode-966000] to gather additional debugging logs...
	I0703 16:33:10.673934    6685 cli_runner.go:164] Run: docker network inspect multinode-966000
	W0703 16:33:10.694003    6685 cli_runner.go:211] docker network inspect multinode-966000 returned with exit code 1
	I0703 16:33:10.694040    6685 network_create.go:287] error running [docker network inspect multinode-966000]: docker network inspect multinode-966000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-966000 not found
	I0703 16:33:10.694052    6685 network_create.go:289] output of [docker network inspect multinode-966000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-966000 not found
	
	** /stderr **
	I0703 16:33:10.694183    6685 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0703 16:33:10.715731    6685 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 16:33:10.717323    6685 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 16:33:10.718879    6685 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 16:33:10.719235    6685 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00159f650}
	I0703 16:33:10.719248    6685 network_create.go:124] attempt to create docker network multinode-966000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0703 16:33:10.719315    6685 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-966000 multinode-966000
	W0703 16:33:10.739096    6685 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-966000 multinode-966000 returned with exit code 1
	W0703 16:33:10.739142    6685 network_create.go:149] failed to create docker network multinode-966000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-966000 multinode-966000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0703 16:33:10.739161    6685 network_create.go:116] failed to create docker network multinode-966000 192.168.76.0/24, will retry: subnet is taken
	I0703 16:33:10.740546    6685 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 16:33:10.741020    6685 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001560360}
	I0703 16:33:10.741034    6685 network_create.go:124] attempt to create docker network multinode-966000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0703 16:33:10.741128    6685 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-966000 multinode-966000
	I0703 16:33:10.797187    6685 network_create.go:108] docker network multinode-966000 192.168.85.0/24 created
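
The second network attempt above shows the collision path: docker network create on 192.168.76.0/24 fails with "Pool overlaps with other one on this address space", the subnet is marked taken, and the walk advances to 192.168.85.0/24, which succeeds. A sketch of that handling, with the flag list trimmed relative to the log's full command:

    // subnetretry_sketch.go
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func createNetwork(name string, thirdOctet int) error {
        if thirdOctet > 247 {
            return fmt.Errorf("no free subnet found for %s", name)
        }
        subnet := fmt.Sprintf("192.168.%d.0/24", thirdOctet)
        gateway := fmt.Sprintf("192.168.%d.1", thirdOctet)
        out, err := exec.Command("docker", "network", "create", "--driver=bridge",
            "--subnet="+subnet, "--gateway="+gateway,
            "-o", "com.docker.network.driver.mtu=65535", name).CombinedOutput()
        if err != nil && strings.Contains(string(out), "Pool overlaps") {
            fmt.Printf("failed to create %s %s, will retry: subnet is taken\n", name, subnet)
            return createNetwork(name, thirdOctet+9)
        }
        return err
    }

    func main() {
        if err := createNetwork("multinode-966000", 76); err != nil {
            fmt.Println(err)
        }
    }
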
	I0703 16:33:10.797217    6685 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-966000" container
	I0703 16:33:10.797335    6685 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0703 16:33:10.817374    6685 cli_runner.go:164] Run: docker volume create multinode-966000 --label name.minikube.sigs.k8s.io=multinode-966000 --label created_by.minikube.sigs.k8s.io=true
	I0703 16:33:10.836606    6685 oci.go:103] Successfully created a docker volume multinode-966000
	I0703 16:33:10.836740    6685 cli_runner.go:164] Run: docker run --rm --name multinode-966000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-966000 --entrypoint /usr/bin/test -v multinode-966000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 -d /var/lib
	I0703 16:33:11.097582    6685 oci.go:107] Successfully prepared a docker volume multinode-966000
	I0703 16:33:11.097632    6685 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0703 16:33:11.097645    6685 kic.go:194] Starting extracting preloaded images to volume ...
	I0703 16:33:11.097744    6685 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-966000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0703 16:39:10.632130    6685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 16:39:10.632265    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:39:10.655306    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:39:10.655423    6685 retry.go:31] will retry after 351.177807ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:39:11.009025    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:39:11.031149    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:39:11.031249    6685 retry.go:31] will retry after 266.18722ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:39:11.297751    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:39:11.319279    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:39:11.319387    6685 retry.go:31] will retry after 592.072298ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:39:11.913414    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:39:11.934380    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	W0703 16:39:11.934486    6685 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	
	W0703 16:39:11.934506    6685 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:39:11.934566    6685 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0703 16:39:11.934626    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:39:11.953859    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:39:11.953951    6685 retry.go:31] will retry after 211.974415ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:39:12.167114    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:39:12.187970    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:39:12.188084    6685 retry.go:31] will retry after 220.14584ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:39:12.409578    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:39:12.430989    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:39:12.431089    6685 retry.go:31] will retry after 622.962961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:39:13.054821    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:39:13.076573    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:39:13.076686    6685 retry.go:31] will retry after 799.787597ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:39:13.878391    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:39:13.899798    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	W0703 16:39:13.899904    6685 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	
	W0703 16:39:13.899919    6685 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:39:13.899929    6685 start.go:128] duration metric: took 6m3.291558843s to createHost
	I0703 16:39:13.899994    6685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 16:39:13.900046    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:39:13.919957    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:39:13.920051    6685 retry.go:31] will retry after 205.417727ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:39:14.127921    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:39:14.150131    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:39:14.150225    6685 retry.go:31] will retry after 369.645986ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:39:14.520283    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:39:14.541608    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:39:14.541697    6685 retry.go:31] will retry after 626.973246ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:39:15.168957    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:39:15.190050    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:39:15.190154    6685 retry.go:31] will retry after 471.699177ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:39:15.664229    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:39:15.686213    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	W0703 16:39:15.686324    6685 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	
	W0703 16:39:15.686339    6685 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:39:15.686404    6685 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0703 16:39:15.686462    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:39:15.707395    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:39:15.707486    6685 retry.go:31] will retry after 366.190702ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:39:16.074039    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:39:16.095374    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:39:16.095490    6685 retry.go:31] will retry after 472.961343ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:39:16.570342    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:39:16.592267    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:39:16.592355    6685 retry.go:31] will retry after 694.953715ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:39:17.288589    6685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:39:17.310755    6685 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	W0703 16:39:17.310857    6685 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	
	W0703 16:39:17.310871    6685 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:39:17.310887    6685 fix.go:56] duration metric: took 6m26.58609379s for fixHost
	I0703 16:39:17.310893    6685 start.go:83] releasing machines lock for "multinode-966000", held for 6m26.586201552s
	W0703 16:39:17.310966    6685 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-966000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-966000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0703 16:39:17.354572    6685 out.go:177] 
	W0703 16:39:17.376395    6685 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0703 16:39:17.376446    6685 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0703 16:39:17.376482    6685 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0703 16:39:17.397451    6685 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-966000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
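The failure above reduces to a single probe: minikube resolves the node's SSH endpoint by asking Docker which host port is published for the container's 22/tcp, using the exact Go template repeated throughout the log. The standalone sketch below is not minikube's actual source (the helper name is made up for illustration); it reproduces that probe and fails with the same "No such container" error once the container is gone.

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// sshHostPort asks Docker which host port is mapped to the container's
// 22/tcp. When the container does not exist, docker exits 1 with
// "Error response from daemon: No such container: <name>", which is the
// failure repeated in the log above.
func sshHostPort(container string) (int, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).CombinedOutput()
	if err != nil {
		return 0, fmt.Errorf("get port 22 for %q: %v: %s", container, err, out)
	}
	return strconv.Atoi(strings.Trim(strings.TrimSpace(string(out)), "'"))
}

func main() {
	port, err := sshHostPort("multinode-966000")
	if err != nil {
		fmt.Println(err) // with no such container, this mirrors the log's error
		return
	}
	fmt.Println("ssh port:", port)
}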
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-966000
helpers_test.go:235: (dbg) docker inspect multinode-966000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-966000",
	        "Id": "4013a0e32c7823a94fe8a0b25be2b80809b27648a4d58b466a23cfbeba9e63b7",
	        "Created": "2024-07-03T23:33:10.758888038Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-966000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-966000 -n multinode-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-966000 -n multinode-966000: exit status 7 (74.870922ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0703 16:39:17.570632    7002 status.go:131] status error: host: state: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-966000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (756.03s)
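FreshStart2Nodes also shows start.go probing /var capacity over SSH with df -h /var | awk 'NR==2{print $5}' and df -BG /var | awk 'NR==2{print $4}'; both fail here only because no SSH port can be resolved. As a hedged illustration (these helpers are not minikube's code), parsing those one-field outputs looks like:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// gibAvailable parses the output of `df -BG /var | awk 'NR==2{print $4}'`,
// e.g. "18G", into an integer number of GiB.
func gibAvailable(dfField string) (int, error) {
	return strconv.Atoi(strings.TrimSuffix(strings.TrimSpace(dfField), "G"))
}

// percentUsed parses the output of `df -h /var | awk 'NR==2{print $5}'`,
// e.g. "42%", into an integer percentage.
func percentUsed(dfField string) (int, error) {
	return strconv.Atoi(strings.TrimSuffix(strings.TrimSpace(dfField), "%"))
}

func main() {
	gib, _ := gibAvailable("18G\n")
	pct, _ := percentUsed("42%\n")
	fmt.Printf("/var: %d GiB free, %d%% used\n", gib, pct)
}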

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (107.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-966000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-966000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (98.806676ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-966000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-966000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-966000 -- rollout status deployment/busybox: exit status 1 (99.413979ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-966000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.898681ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-966000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0703 16:39:17.868949    1695 retry.go:31] will retry after 1.299612852s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.777339ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-966000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0703 16:39:19.270828    1695 retry.go:31] will retry after 1.251869485s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.254737ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-966000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0703 16:39:20.628638    1695 retry.go:31] will retry after 2.449061462s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.988501ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-966000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0703 16:39:23.182030    1695 retry.go:31] will retry after 3.697149005s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.436144ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-966000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0703 16:39:26.983611    1695 retry.go:31] will retry after 5.389389176s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.41535ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-966000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0703 16:39:32.475306    1695 retry.go:31] will retry after 9.874972685s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.25807ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-966000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0703 16:39:42.456113    1695 retry.go:31] will retry after 7.361108729s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.161049ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-966000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0703 16:39:49.923731    1695 retry.go:31] will retry after 20.864820122s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.233571ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-966000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0703 16:40:10.890774    1695 retry.go:31] will retry after 25.965820848s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.925605ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-966000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0703 16:40:36.964070    1695 retry.go:31] will retry after 27.639329299s: failed to retrieve Pod IPs (may be temporary): exit status 1
E0703 16:40:42.619402    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 16:41:00.564178    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.445701ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-966000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (100.133723ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-966000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-966000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-966000 -- exec  -- nslookup kubernetes.io: exit status 1 (98.41458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-966000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-966000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-966000 -- exec  -- nslookup kubernetes.default: exit status 1 (97.964947ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-966000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-966000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-966000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (98.337215ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-966000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-966000
helpers_test.go:235: (dbg) docker inspect multinode-966000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-966000",
	        "Id": "4013a0e32c7823a94fe8a0b25be2b80809b27648a4d58b466a23cfbeba9e63b7",
	        "Created": "2024-07-03T23:33:10.758888038Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-966000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-966000 -n multinode-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-966000 -n multinode-966000: exit status 7 (77.309768ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0703 16:41:05.204331    7092 status.go:131] status error: host: state: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-966000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (107.63s)
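The retry.go:31 lines above show the test's poll loop backing off roughly exponentially with jitter (1.3s, 1.25s, 2.4s, 3.7s, 5.4s, 9.9s, ...). A minimal sketch of that retry-with-backoff pattern follows; the cap, growth factor, and jitter here are assumptions for illustration, not minikube's actual values.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries op until it succeeds or the deadline passes,
// sleeping an increasing, jittered interval between attempts.
func retryWithBackoff(op func() error, total time.Duration) error {
	deadline := time.Now().Add(total)
	delay := time.Second
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up: %w", err)
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		if delay < 30*time.Second {
			delay *= 2 // grow the base interval up to a cap
		}
	}
}

func main() {
	attempts := 0
	_ = retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return fmt.Errorf("no server found for cluster %q", "multinode-966000")
		}
		return nil
	}, time.Minute)
	fmt.Println("succeeded after", attempts, "attempts")
}

Here, as in the log, the operation never succeeds, so the loop runs until the overall deadline and the test then gives up.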

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-966000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (99.317636ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-966000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-966000
helpers_test.go:235: (dbg) docker inspect multinode-966000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-966000",
	        "Id": "4013a0e32c7823a94fe8a0b25be2b80809b27648a4d58b466a23cfbeba9e63b7",
	        "Created": "2024-07-03T23:33:10.758888038Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-966000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-966000 -n multinode-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-966000 -n multinode-966000: exit status 7 (81.244516ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0703 16:41:05.408543    7100 status.go:131] status error: host: state: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-966000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.20s)

                                                
                                    
TestMultiNode/serial/AddNode (0.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-966000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-966000 -v 3 --alsologtostderr: exit status 80 (171.221181ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0703 16:41:05.469589    7103 out.go:291] Setting OutFile to fd 1 ...
	I0703 16:41:05.469802    7103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 16:41:05.469808    7103 out.go:304] Setting ErrFile to fd 2...
	I0703 16:41:05.469812    7103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 16:41:05.470007    7103 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18998-1161/.minikube/bin
	I0703 16:41:05.470375    7103 mustload.go:65] Loading cluster: multinode-966000
	I0703 16:41:05.470650    7103 config.go:182] Loaded profile config "multinode-966000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0703 16:41:05.471023    7103 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:41:05.491621    7103 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:41:05.515704    7103 out.go:177] 
	W0703 16:41:05.536646    7103 out.go:239] X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-966000 host status: state: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-966000 host status: state: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	
	I0703 16:41:05.558747    7103 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-966000 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-966000
helpers_test.go:235: (dbg) docker inspect multinode-966000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-966000",
	        "Id": "4013a0e32c7823a94fe8a0b25be2b80809b27648a4d58b466a23cfbeba9e63b7",
	        "Created": "2024-07-03T23:33:10.758888038Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-966000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-966000 -n multinode-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-966000 -n multinode-966000: exit status 7 (74.988291ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0703 16:41:05.679004    7107 status.go:131] status error: host: state: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-966000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.27s)
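AddNode exits 80 (GUEST_STATUS) because the control-plane status probe, docker container inspect multinode-966000 --format={{.State.Status}}, exits 1 for the deleted container. Below is a standalone sketch of such a probe; the state names are assumptions for illustration, chosen to mirror the "Nonexistent" seen in the post-mortems.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState runs the same `docker container inspect --format` command as the
// log above and maps a "No such container" failure to a Nonexistent state.
func hostState(container string) string {
	out, err := exec.Command("docker", "container", "inspect",
		container, "--format={{.State.Status}}").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "No such container") {
			return "Nonexistent"
		}
		return "Unknown"
	}
	return strings.TrimSpace(string(out)) // e.g. "running", "exited"
}

func main() {
	fmt.Println(hostState("multinode-966000"))
}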

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-966000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-966000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (36.750343ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-966000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-966000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-966000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-966000
helpers_test.go:235: (dbg) docker inspect multinode-966000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-966000",
	        "Id": "4013a0e32c7823a94fe8a0b25be2b80809b27648a4d58b466a23cfbeba9e63b7",
	        "Created": "2024-07-03T23:33:10.758888038Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-966000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-966000 -n multinode-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-966000 -n multinode-966000: exit status 7 (76.000173ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0703 16:41:05.815399    7112 status.go:131] status error: host: state: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-966000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.14s)
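MultiNodeLabels fails twice: kubectl exits 1 because the context is missing, and the test then tries to JSON-decode the empty stdout, which is where "unexpected end of JSON input" comes from. That message is exactly what encoding/json returns for empty input, as this self-contained snippet shows:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels) // kubectl printed nothing
	fmt.Println(err)                           // unexpected end of JSON input
}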

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:166: expected profile "multinode-966000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[{\"Name\":\"mount-start-2-052000\",\"Status\":\"\",\"Config\":null,\"Active\":false,\"ActiveKubeContext\":false}],\"valid\":[{\"Name\":\"multinode-966000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-966000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"multinode-966000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-966000
helpers_test.go:235: (dbg) docker inspect multinode-966000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-966000",
	        "Id": "4013a0e32c7823a94fe8a0b25be2b80809b27648a4d58b466a23cfbeba9e63b7",
	        "Created": "2024-07-03T23:33:10.758888038Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-966000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-966000 -n multinode-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-966000 -n multinode-966000: exit status 7 (75.158542ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0703 16:41:06.034672    7120 status.go:131] status error: host: state: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-966000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.22s)
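The assertion at multinode_test.go:166 counts the entries in the profile's Config.Nodes array from the JSON above (one node recorded, three expected). A sketch of decoding just the fields that check needs; the struct names are invented here, but the JSON keys match the output shown.

package main

import (
	"encoding/json"
	"fmt"
)

// profileList models only the slice of `minikube profile list --output json`
// that the failing assertion inspects.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config *struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
				Worker       bool `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Trimmed-down stand-in for the output captured in the log above.
	raw := []byte(`{"valid":[{"Name":"multinode-966000","Status":"Unknown","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // here: 1, test wanted 3
	}
}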

                                                
                                    
TestMultiNode/serial/StopNode (0.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-966000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-966000 node stop m03: exit status 85 (148.385024ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-966000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-966000 status
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-966000 status --alsologtostderr
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-966000 status --alsologtostderr": 
multinode_test.go:271: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-966000 status --alsologtostderr": 
multinode_test.go:275: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-966000 status --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-966000
helpers_test.go:235: (dbg) docker inspect multinode-966000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-966000",
	        "Id": "4013a0e32c7823a94fe8a0b25be2b80809b27648a4d58b466a23cfbeba9e63b7",
	        "Created": "2024-07-03T23:33:10.758888038Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-966000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-966000 -n multinode-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-966000 -n multinode-966000: exit status 7 (76.769093ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0703 16:41:06.514572    7140 status.go:131] status error: host: state: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-966000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.40s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (0.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-966000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-966000 node start m03 -v=7 --alsologtostderr: exit status 85 (145.256468ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0703 16:41:06.569044    7143 out.go:291] Setting OutFile to fd 1 ...
	I0703 16:41:06.569393    7143 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 16:41:06.569399    7143 out.go:304] Setting ErrFile to fd 2...
	I0703 16:41:06.569403    7143 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 16:41:06.569569    7143 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18998-1161/.minikube/bin
	I0703 16:41:06.569886    7143 mustload.go:65] Loading cluster: multinode-966000
	I0703 16:41:06.570168    7143 config.go:182] Loaded profile config "multinode-966000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0703 16:41:06.591600    7143 out.go:177] 
	W0703 16:41:06.613280    7143 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0703 16:41:06.613303    7143 out.go:239] * 
	* 
	W0703 16:41:06.617070    7143 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0703 16:41:06.638304    7143 out.go:177] 

** /stderr **
multinode_test.go:284: I0703 16:41:06.569044    7143 out.go:291] Setting OutFile to fd 1 ...
I0703 16:41:06.569393    7143 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 16:41:06.569399    7143 out.go:304] Setting ErrFile to fd 2...
I0703 16:41:06.569403    7143 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 16:41:06.569569    7143 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18998-1161/.minikube/bin
I0703 16:41:06.569886    7143 mustload.go:65] Loading cluster: multinode-966000
I0703 16:41:06.570168    7143 config.go:182] Loaded profile config "multinode-966000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0703 16:41:06.591600    7143 out.go:177] 
W0703 16:41:06.613280    7143 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0703 16:41:06.613303    7143 out.go:239] * 
* 
W0703 16:41:06.617070    7143 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0703 16:41:06.638304    7143 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-966000 node start m03 -v=7 --alsologtostderr": exit status 85
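The exit status 85 here is the GUEST_NODE_RETRIEVE path shown in the log: once the host container vanished, the reloaded profile config only knows the primary control-plane node, so looking up "m03" fails before any start work begins. A minimal Go sketch of that lookup failure, using simplified stand-in types (Profile, Node, and retrieve are hypothetical, not minikube's actual code):

package main

import (
	"errors"
	"fmt"
	"os"
)

// Node and Profile are simplified stand-ins for minikube's profile config.
type Node struct{ Name string }
type Profile struct{ Nodes []Node }

var errNodeNotFound = errors.New("could not find node")

// retrieve looks a node up by name, mirroring the failure mode logged above.
func retrieve(p Profile, name string) (Node, error) {
	for _, n := range p.Nodes {
		if n.Name == name {
			return n, nil
		}
	}
	return Node{}, fmt.Errorf("retrieving node: %w %s", errNodeNotFound, name)
}

func main() {
	// After the container was lost, only the primary node is known.
	p := Profile{Nodes: []Node{{Name: "multinode-966000"}}}
	if _, err := retrieve(p, "m03"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to GUEST_NODE_RETRIEVE:", err)
		os.Exit(85) // exit code taken from the test assertion above
	}
}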
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-966000 status -v=7 --alsologtostderr
multinode_test.go:298: status says both hosts are not running: args "out/minikube-darwin-amd64 -p multinode-966000 status -v=7 --alsologtostderr": 
multinode_test.go:302: status says both kubelets are not running: args "out/minikube-darwin-amd64 -p multinode-966000 status -v=7 --alsologtostderr": 
multinode_test.go:306: (dbg) Run:  kubectl get nodes
multinode_test.go:306: (dbg) Non-zero exit: kubectl get nodes: exit status 1 (46.206821ms)

** stderr ** 
	E0703 16:41:06.780777    7148 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E0703 16:41:06.781363    7148 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E0703 16:41:06.782385    7148 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E0703 16:41:06.782761    7148 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E0703 16:41:06.783941    7148 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	The connection to the server localhost:8080 was refused - did you specify the right host or port?

** /stderr **
multinode_test.go:308: failed to kubectl get nodes. args "kubectl get nodes" : exit status 1
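The connection-refused errors above are kubectl falling back to its default server of localhost:8080 when no reachable kubeconfig context exists; the harness simply shells the command out and inspects the exit status. A minimal Go sketch of that pattern (illustrative only, not the test's actual helper):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Run kubectl the way the harness does and recover its exit code.
	out, err := exec.Command("kubectl", "get", "nodes").CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// With no reachable cluster, kubectl exits 1 after the
		// localhost:8080 connection is refused, as logged above.
		fmt.Printf("kubectl exited %d: %s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not start kubectl:", err)
		return
	}
	fmt.Printf("%s", out)
}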
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-966000
helpers_test.go:235: (dbg) docker inspect multinode-966000:

-- stdout --
	[
	    {
	        "Name": "multinode-966000",
	        "Id": "4013a0e32c7823a94fe8a0b25be2b80809b27648a4d58b466a23cfbeba9e63b7",
	        "Created": "2024-07-03T23:33:10.758888038Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-966000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-966000 -n multinode-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-966000 -n multinode-966000: exit status 7 (75.528406ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0703 16:41:06.881465    7150 status.go:131] status error: host: state: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-966000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.37s)
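Note what the post-mortem shows: docker inspect resolves only the leftover multinode-966000 network object, while the container itself is gone, so the status probe maps the daemon's "No such container" error to Nonexistent. A small Go sketch of that probe pattern, assuming the docker CLI behavior seen in the log (an illustration, not minikube's status code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState asks dockerd for the container state and treats a missing
// container as Nonexistent instead of a hard failure, as the probe above does.
func containerState(name string) string {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "No such container") {
			return "Nonexistent"
		}
		return "Unknown"
	}
	return strings.TrimSpace(string(out))
}

func main() {
	fmt.Println(containerState("multinode-966000"))
}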

TestMultiNode/serial/RestartKeepsNodes (792.16s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-966000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-966000
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-966000: exit status 82 (14.928958211s)

-- stdout --
	* Stopping node "multinode-966000"  ...
	* Stopping node "multinode-966000"  ...
	* Stopping node "multinode-966000"  ...
	* Stopping node "multinode-966000"  ...
	* Stopping node "multinode-966000"  ...
	* Stopping node "multinode-966000"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-966000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-darwin-amd64 stop -p multinode-966000" : exit status 82
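The six "Stopping node" lines followed by GUEST_STOP_TIMEOUT are a bounded retry loop around a stop that can never succeed, because the container no longer exists for dockerd to stop. A hedged Go sketch of such a loop; the attempt count and exit code 82 follow the log above, while the delay and the stopWithRetries helper are assumptions for illustration:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// stopWithRetries tries a fixed number of stops, backing off between
// attempts, and gives up with the last error once the budget is spent.
func stopWithRetries(container string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		fmt.Printf("* Stopping node %q  ...\n", container)
		if err = exec.Command("docker", "stop", container).Run(); err == nil {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed pacing; the real delay is not in the log
	}
	return fmt.Errorf("unable to stop %s: %w", container, err)
}

func main() {
	if err := stopWithRetries("multinode-966000", 6); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to GUEST_STOP_TIMEOUT:", err)
		os.Exit(82)
	}
}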
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-966000 --wait=true -v=8 --alsologtostderr
E0703 16:45:25.670295    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 16:45:42.620329    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 16:46:00.566202    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 16:50:42.679143    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 16:50:43.676048    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 16:51:00.622259    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-966000 --wait=true -v=8 --alsologtostderr: exit status 52 (12m57.000767237s)

-- stdout --
	* [multinode-966000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=18998
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18998-1161/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18998-1161/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-966000" primary control-plane node in "multinode-966000" cluster
	* Pulling base image v0.0.44-1719972989-19184 ...
	* docker "multinode-966000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-966000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0703 16:41:21.920999    7164 out.go:291] Setting OutFile to fd 1 ...
	I0703 16:41:21.921250    7164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 16:41:21.921256    7164 out.go:304] Setting ErrFile to fd 2...
	I0703 16:41:21.921260    7164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 16:41:21.921427    7164 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18998-1161/.minikube/bin
	I0703 16:41:21.922778    7164 out.go:298] Setting JSON to false
	I0703 16:41:21.945281    7164 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":4250,"bootTime":1720045831,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0703 16:41:21.945372    7164 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0703 16:41:21.967194    7164 out.go:177] * [multinode-966000] minikube v1.33.1 on Darwin 14.5
	I0703 16:41:22.009007    7164 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 16:41:22.009046    7164 notify.go:220] Checking for updates...
	I0703 16:41:22.051718    7164 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18998-1161/kubeconfig
	I0703 16:41:22.074715    7164 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0703 16:41:22.095090    7164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 16:41:22.115987    7164 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18998-1161/.minikube
	I0703 16:41:22.136896    7164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 16:41:22.158754    7164 config.go:182] Loaded profile config "multinode-966000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0703 16:41:22.158940    7164 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 16:41:22.185659    7164 docker.go:122] docker version: linux-26.1.4:Docker Desktop 4.31.0 (153195)
	I0703 16:41:22.185833    7164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 16:41:22.265966    7164 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:false NGoroutines:93 SystemTime:2024-07-03 23:41:22.257114032 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.6.31-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:33654255616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.24] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.2.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.9.3]] Warnings:<nil>}}
	I0703 16:41:22.287172    7164 out.go:177] * Using the docker driver based on existing profile
	I0703 16:41:22.308118    7164 start.go:297] selected driver: docker
	I0703 16:41:22.308146    7164 start.go:901] validating driver "docker" against &{Name:multinode-966000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-966000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 16:41:22.308246    7164 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 16:41:22.308457    7164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 16:41:22.390015    7164 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:false NGoroutines:93 SystemTime:2024-07-03 23:41:22.380658216 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.6.31-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:33654255616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.24] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.2.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.9.3]] Warnings:<nil>}}
	I0703 16:41:22.393018    7164 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 16:41:22.393052    7164 cni.go:84] Creating CNI manager for ""
	I0703 16:41:22.393060    7164 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0703 16:41:22.393118    7164 start.go:340] cluster config:
	{Name:multinode-966000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-966000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 16:41:22.415036    7164 out.go:177] * Starting "multinode-966000" primary control-plane node in "multinode-966000" cluster
	I0703 16:41:22.436358    7164 cache.go:121] Beginning downloading kic base image for docker with docker
	I0703 16:41:22.457514    7164 out.go:177] * Pulling base image v0.0.44-1719972989-19184 ...
	I0703 16:41:22.478661    7164 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0703 16:41:22.478735    7164 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 in local docker daemon
	I0703 16:41:22.478754    7164 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0703 16:41:22.478778    7164 cache.go:56] Caching tarball of preloaded images
	I0703 16:41:22.479021    7164 preload.go:173] Found /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0703 16:41:22.479040    7164 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0703 16:41:22.479893    7164 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/multinode-966000/config.json ...
	I0703 16:41:22.499698    7164 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 in local docker daemon, skipping pull
	I0703 16:41:22.499730    7164 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 exists in daemon, skipping load
	I0703 16:41:22.499752    7164 cache.go:194] Successfully downloaded all kic artifacts
	I0703 16:41:22.499794    7164 start.go:360] acquireMachinesLock for multinode-966000: {Name:mk9a872cb80fb41099765e3cf2904deb4ec994cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 16:41:22.499901    7164 start.go:364] duration metric: took 88.594µs to acquireMachinesLock for "multinode-966000"
	I0703 16:41:22.499925    7164 start.go:96] Skipping create...Using existing machine configuration
	I0703 16:41:22.499935    7164 fix.go:54] fixHost starting: 
	I0703 16:41:22.500184    7164 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:41:22.519102    7164 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:41:22.519155    7164 fix.go:112] recreateIfNeeded on multinode-966000: state= err=unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:41:22.519179    7164 fix.go:117] machineExists: false. err=machine does not exist
	I0703 16:41:22.540432    7164 out.go:177] * docker "multinode-966000" container is missing, will recreate.
	I0703 16:41:22.582636    7164 delete.go:124] DEMOLISHING multinode-966000 ...
	I0703 16:41:22.582806    7164 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:41:22.603766    7164 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	W0703 16:41:22.603811    7164 stop.go:83] unable to get state: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:41:22.603842    7164 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:41:22.604203    7164 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:41:22.624061    7164 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:41:22.624112    7164 delete.go:82] Unable to get host status for multinode-966000, assuming it has already been deleted: state: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:41:22.624201    7164 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-966000
	W0703 16:41:22.644735    7164 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-966000 returned with exit code 1
	I0703 16:41:22.644777    7164 kic.go:371] could not find the container multinode-966000 to remove it. will try anyways
	I0703 16:41:22.644856    7164 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:41:22.665266    7164 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	W0703 16:41:22.665313    7164 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:41:22.665393    7164 cli_runner.go:164] Run: docker exec --privileged -t multinode-966000 /bin/bash -c "sudo init 0"
	W0703 16:41:22.684698    7164 cli_runner.go:211] docker exec --privileged -t multinode-966000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0703 16:41:22.684725    7164 oci.go:650] error shutdown multinode-966000: docker exec --privileged -t multinode-966000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:41:23.685275    7164 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:41:23.706472    7164 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:41:23.706516    7164 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:41:23.706535    7164 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:41:23.706571    7164 retry.go:31] will retry after 735.330646ms: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:41:24.444299    7164 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:41:24.466388    7164 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:41:24.466432    7164 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:41:24.466440    7164 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:41:24.466468    7164 retry.go:31] will retry after 700.439278ms: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:41:25.167140    7164 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:41:25.188154    7164 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:41:25.188201    7164 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:41:25.188210    7164 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:41:25.188235    7164 retry.go:31] will retry after 1.281390417s: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:41:26.471975    7164 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:41:26.493525    7164 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:41:26.493569    7164 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:41:26.493585    7164 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:41:26.493613    7164 retry.go:31] will retry after 2.384040511s: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:41:28.878931    7164 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:41:28.900132    7164 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:41:28.900174    7164 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:41:28.900194    7164 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:41:28.900221    7164 retry.go:31] will retry after 3.122771574s: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:41:32.023603    7164 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:41:32.045376    7164 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:41:32.045430    7164 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:41:32.045439    7164 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:41:32.045466    7164 retry.go:31] will retry after 4.813648277s: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:41:36.861582    7164 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:41:36.883560    7164 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:41:36.883604    7164 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:41:36.883612    7164 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:41:36.883640    7164 retry.go:31] will retry after 5.984705938s: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:41:42.869139    7164 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:41:42.890225    7164 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:41:42.890267    7164 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:41:42.890274    7164 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:41:42.890307    7164 oci.go:88] couldn't shut down multinode-966000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	 
	I0703 16:41:42.890381    7164 cli_runner.go:164] Run: docker rm -f -v multinode-966000
	I0703 16:41:42.910121    7164 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-966000
	W0703 16:41:42.929494    7164 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-966000 returned with exit code 1
	I0703 16:41:42.929612    7164 cli_runner.go:164] Run: docker network inspect multinode-966000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0703 16:41:42.949265    7164 cli_runner.go:164] Run: docker network rm multinode-966000
	I0703 16:41:43.026769    7164 fix.go:124] Sleeping 1 second for extra luck!
	I0703 16:41:44.027481    7164 start.go:125] createHost starting for "" (driver="docker")
	I0703 16:41:44.050725    7164 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0703 16:41:44.050903    7164 start.go:159] libmachine.API.Create for "multinode-966000" (driver="docker")
	I0703 16:41:44.050971    7164 client.go:168] LocalClient.Create starting
	I0703 16:41:44.051189    7164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18998-1161/.minikube/certs/ca.pem
	I0703 16:41:44.051294    7164 main.go:141] libmachine: Decoding PEM data...
	I0703 16:41:44.051326    7164 main.go:141] libmachine: Parsing certificate...
	I0703 16:41:44.051414    7164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18998-1161/.minikube/certs/cert.pem
	I0703 16:41:44.051480    7164 main.go:141] libmachine: Decoding PEM data...
	I0703 16:41:44.051490    7164 main.go:141] libmachine: Parsing certificate...
	I0703 16:41:44.052043    7164 cli_runner.go:164] Run: docker network inspect multinode-966000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0703 16:41:44.073977    7164 cli_runner.go:211] docker network inspect multinode-966000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0703 16:41:44.074067    7164 network_create.go:284] running [docker network inspect multinode-966000] to gather additional debugging logs...
	I0703 16:41:44.074083    7164 cli_runner.go:164] Run: docker network inspect multinode-966000
	W0703 16:41:44.094689    7164 cli_runner.go:211] docker network inspect multinode-966000 returned with exit code 1
	I0703 16:41:44.094719    7164 network_create.go:287] error running [docker network inspect multinode-966000]: docker network inspect multinode-966000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-966000 not found
	I0703 16:41:44.094734    7164 network_create.go:289] output of [docker network inspect multinode-966000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-966000 not found
	
	** /stderr **
	I0703 16:41:44.094870    7164 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0703 16:41:44.116787    7164 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 16:41:44.118458    7164 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 16:41:44.118927    7164 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015b5890}
	I0703 16:41:44.118986    7164 network_create.go:124] attempt to create docker network multinode-966000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0703 16:41:44.119134    7164 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-966000 multinode-966000
	I0703 16:41:44.174464    7164 network_create.go:108] docker network multinode-966000 192.168.67.0/24 created
	I0703 16:41:44.174515    7164 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-966000" container
	I0703 16:41:44.174632    7164 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0703 16:41:44.194780    7164 cli_runner.go:164] Run: docker volume create multinode-966000 --label name.minikube.sigs.k8s.io=multinode-966000 --label created_by.minikube.sigs.k8s.io=true
	I0703 16:41:44.213867    7164 oci.go:103] Successfully created a docker volume multinode-966000
	I0703 16:41:44.213987    7164 cli_runner.go:164] Run: docker run --rm --name multinode-966000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-966000 --entrypoint /usr/bin/test -v multinode-966000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 -d /var/lib
	I0703 16:41:44.460981    7164 oci.go:107] Successfully prepared a docker volume multinode-966000
	I0703 16:41:44.461036    7164 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0703 16:41:44.461059    7164 kic.go:194] Starting extracting preloaded images to volume ...
	I0703 16:41:44.461176    7164 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-966000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0703 16:47:44.054687    7164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 16:47:44.054818    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:47:44.076803    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:47:44.076929    7164 retry.go:31] will retry after 150.927482ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:44.230181    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:47:44.252706    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:47:44.252815    7164 retry.go:31] will retry after 444.412033ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:44.697599    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:47:44.719885    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:47:44.720000    7164 retry.go:31] will retry after 464.804345ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:45.185837    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:47:45.207057    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:47:45.207154    7164 retry.go:31] will retry after 808.381183ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:46.016614    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:47:46.038239    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	W0703 16:47:46.038357    7164 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	
	W0703 16:47:46.038375    7164 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:46.038439    7164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0703 16:47:46.038512    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:47:46.058430    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:47:46.058530    7164 retry.go:31] will retry after 299.361968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:46.359628    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:47:46.381398    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:47:46.381497    7164 retry.go:31] will retry after 271.712588ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:46.655605    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:47:46.677651    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:47:46.677747    7164 retry.go:31] will retry after 368.293953ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:47.048430    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:47:47.071617    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:47:47.071718    7164 retry.go:31] will retry after 938.937611ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:48.013036    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:47:48.035129    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	W0703 16:47:48.035242    7164 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	
	W0703 16:47:48.035261    7164 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:48.035297    7164 start.go:128] duration metric: took 6m4.006416636s to createHost
	I0703 16:47:48.035365    7164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 16:47:48.035426    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:47:48.056131    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:47:48.056228    7164 retry.go:31] will retry after 270.330748ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:48.326909    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:47:48.348604    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:47:48.348703    7164 retry.go:31] will retry after 253.495956ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:48.602832    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:47:48.623990    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:47:48.624088    7164 retry.go:31] will retry after 463.909093ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:49.089235    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:47:49.110966    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	W0703 16:47:49.111108    7164 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	
	W0703 16:47:49.111126    7164 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:49.111191    7164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0703 16:47:49.111249    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:47:49.130476    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:47:49.130588    7164 retry.go:31] will retry after 345.597475ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:49.478471    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:47:49.500601    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:47:49.500706    7164 retry.go:31] will retry after 441.652282ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:49.944112    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:47:49.966405    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:47:49.966498    7164 retry.go:31] will retry after 642.89416ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:50.611072    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:47:50.632516    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	W0703 16:47:50.632629    7164 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	
	W0703 16:47:50.632646    7164 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:50.632657    7164 fix.go:56] duration metric: took 6m28.131262572s for fixHost
	I0703 16:47:50.632663    7164 start.go:83] releasing machines lock for "multinode-966000", held for 6m28.131292982s
	W0703 16:47:50.632678    7164 start.go:713] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W0703 16:47:50.632754    7164 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I0703 16:47:50.632761    7164 start.go:728] Will try again in 5 seconds ...
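	# Editor's note: "360.000000 seconds" is the six-minute create-host budget.
	# The attempt above ran 6m4.006s = 364.006 s > 360 s, so the deadline fired
	# before the machine ever came up; the surrounding fixHost pass totals the
	# 6m28s reported when the machines lock is released.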
	I0703 16:47:55.633110    7164 start.go:360] acquireMachinesLock for multinode-966000: {Name:mk9a872cb80fb41099765e3cf2904deb4ec994cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 16:47:55.633275    7164 start.go:364] duration metric: took 134.102µs to acquireMachinesLock for "multinode-966000"
	I0703 16:47:55.633313    7164 start.go:96] Skipping create...Using existing machine configuration
	I0703 16:47:55.633319    7164 fix.go:54] fixHost starting: 
	I0703 16:47:55.633640    7164 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:47:55.654427    7164 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:47:55.654470    7164 fix.go:112] recreateIfNeeded on multinode-966000: state= err=unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:55.654486    7164 fix.go:117] machineExists: false. err=machine does not exist
	I0703 16:47:55.676219    7164 out.go:177] * docker "multinode-966000" container is missing, will recreate.
	I0703 16:47:55.717710    7164 delete.go:124] DEMOLISHING multinode-966000 ...
	I0703 16:47:55.717882    7164 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:47:55.738177    7164 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	W0703 16:47:55.738222    7164 stop.go:83] unable to get state: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:55.738241    7164 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:55.738622    7164 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:47:55.757894    7164 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:47:55.757959    7164 delete.go:82] Unable to get host status for multinode-966000, assuming it has already been deleted: state: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:55.758043    7164 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-966000
	W0703 16:47:55.777296    7164 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-966000 returned with exit code 1
	I0703 16:47:55.777326    7164 kic.go:371] could not find the container multinode-966000 to remove it. will try anyways
	I0703 16:47:55.777398    7164 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:47:55.796624    7164 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	W0703 16:47:55.796666    7164 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:55.796752    7164 cli_runner.go:164] Run: docker exec --privileged -t multinode-966000 /bin/bash -c "sudo init 0"
	W0703 16:47:55.815891    7164 cli_runner.go:211] docker exec --privileged -t multinode-966000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0703 16:47:55.815919    7164 oci.go:650] error shutdown multinode-966000: docker exec --privileged -t multinode-966000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-966000
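	# Editor's note: graceful shutdown is attempted first by running init 0
	# inside the guest, i.e. the exact command logged above:
	#   docker exec --privileged -t multinode-966000 /bin/bash -c "sudo init 0"
	# With no container this exits 1 immediately, and the code falls through
	# to the verify-shutdown retry loop that follows.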
	I0703 16:47:56.818310    7164 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:47:56.839969    7164 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:47:56.840014    7164 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:56.840022    7164 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:47:56.840046    7164 retry.go:31] will retry after 419.070296ms: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:57.260118    7164 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:47:57.281786    7164 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:47:57.281829    7164 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:57.281841    7164 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:47:57.281865    7164 retry.go:31] will retry after 741.55823ms: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:58.025849    7164 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:47:58.047533    7164 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:47:58.047583    7164 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:58.047596    7164 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:47:58.047622    7164 retry.go:31] will retry after 669.541158ms: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:58.719594    7164 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:47:58.741676    7164 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:47:58.741720    7164 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:58.741728    7164 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:47:58.741760    7164 retry.go:31] will retry after 1.07957384s: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:59.823598    7164 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:47:59.845613    7164 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:47:59.845656    7164 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:47:59.845667    7164 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:47:59.845691    7164 retry.go:31] will retry after 3.288673115s: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:48:03.134736    7164 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:48:03.156388    7164 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:48:03.156437    7164 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:48:03.156453    7164 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:48:03.156477    7164 retry.go:31] will retry after 3.004015783s: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:48:06.161508    7164 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:48:06.182965    7164 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:48:06.183010    7164 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:48:06.183020    7164 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:48:06.183046    7164 retry.go:31] will retry after 4.988563588s: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:48:11.174094    7164 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:48:11.195622    7164 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:48:11.195671    7164 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:48:11.195680    7164 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:48:11.195712    7164 oci.go:88] couldn't shut down multinode-966000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	 
	I0703 16:48:11.195798    7164 cli_runner.go:164] Run: docker rm -f -v multinode-966000
	I0703 16:48:11.215958    7164 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-966000
	W0703 16:48:11.235347    7164 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-966000 returned with exit code 1
	I0703 16:48:11.235454    7164 cli_runner.go:164] Run: docker network inspect multinode-966000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0703 16:48:11.255041    7164 cli_runner.go:164] Run: docker network rm multinode-966000
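	# Editor's note: the demolish path ends in a forced cleanup that can be
	# reproduced by hand (names taken from this log):
	#   docker rm -f -v multinode-966000     # remove container + anonymous volumes
	#   docker network rm multinode-966000   # drop the per-profile bridge network
	# Either command simply errors harmlessly if its resource is already gone.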
	I0703 16:48:11.332297    7164 fix.go:124] Sleeping 1 second for extra luck!
	I0703 16:48:12.332891    7164 start.go:125] createHost starting for "" (driver="docker")
	I0703 16:48:12.355051    7164 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0703 16:48:12.355225    7164 start.go:159] libmachine.API.Create for "multinode-966000" (driver="docker")
	I0703 16:48:12.355253    7164 client.go:168] LocalClient.Create starting
	I0703 16:48:12.355475    7164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18998-1161/.minikube/certs/ca.pem
	I0703 16:48:12.355576    7164 main.go:141] libmachine: Decoding PEM data...
	I0703 16:48:12.355606    7164 main.go:141] libmachine: Parsing certificate...
	I0703 16:48:12.355702    7164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18998-1161/.minikube/certs/cert.pem
	I0703 16:48:12.355784    7164 main.go:141] libmachine: Decoding PEM data...
	I0703 16:48:12.355799    7164 main.go:141] libmachine: Parsing certificate...
	I0703 16:48:12.377227    7164 cli_runner.go:164] Run: docker network inspect multinode-966000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0703 16:48:12.398648    7164 cli_runner.go:211] docker network inspect multinode-966000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0703 16:48:12.398740    7164 network_create.go:284] running [docker network inspect multinode-966000] to gather additional debugging logs...
	I0703 16:48:12.398757    7164 cli_runner.go:164] Run: docker network inspect multinode-966000
	W0703 16:48:12.418717    7164 cli_runner.go:211] docker network inspect multinode-966000 returned with exit code 1
	I0703 16:48:12.418743    7164 network_create.go:287] error running [docker network inspect multinode-966000]: docker network inspect multinode-966000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-966000 not found
	I0703 16:48:12.418757    7164 network_create.go:289] output of [docker network inspect multinode-966000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-966000 not found
	
	** /stderr **
	I0703 16:48:12.418896    7164 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0703 16:48:12.440162    7164 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 16:48:12.441771    7164 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 16:48:12.443345    7164 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 16:48:12.443675    7164 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014d0630}
	I0703 16:48:12.443690    7164 network_create.go:124] attempt to create docker network multinode-966000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0703 16:48:12.443768    7164 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-966000 multinode-966000
	W0703 16:48:12.463173    7164 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-966000 multinode-966000 returned with exit code 1
	W0703 16:48:12.463212    7164 network_create.go:149] failed to create docker network multinode-966000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-966000 multinode-966000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0703 16:48:12.463230    7164 network_create.go:116] failed to create docker network multinode-966000 192.168.76.0/24, will retry: subnet is taken
	I0703 16:48:12.464620    7164 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 16:48:12.465020    7164 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014d18f0}
	I0703 16:48:12.465032    7164 network_create.go:124] attempt to create docker network multinode-966000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0703 16:48:12.465093    7164 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-966000 multinode-966000
	I0703 16:48:12.519843    7164 network_create.go:108] docker network multinode-966000 192.168.85.0/24 created
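	# Editor's note: the subnet probe walks minikube's private /24 candidates
	# (192.168.49, .58, .67, .76, .85, ...) until docker accepts one; a manual
	# equivalent of the call that finally succeeded:
	#   docker network create --driver=bridge \
	#     --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
	#     -o com.docker.network.driver.mtu=65535 multinode-966000
	# "Pool overlaps with other one on this address space" above means the
	# candidate /24 was already routed or in use, so the next one was tried.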
	I0703 16:48:12.519874    7164 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-966000" container
	I0703 16:48:12.519995    7164 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0703 16:48:12.539981    7164 cli_runner.go:164] Run: docker volume create multinode-966000 --label name.minikube.sigs.k8s.io=multinode-966000 --label created_by.minikube.sigs.k8s.io=true
	I0703 16:48:12.559776    7164 oci.go:103] Successfully created a docker volume multinode-966000
	I0703 16:48:12.559898    7164 cli_runner.go:164] Run: docker run --rm --name multinode-966000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-966000 --entrypoint /usr/bin/test -v multinode-966000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 -d /var/lib
	I0703 16:48:12.806853    7164 oci.go:107] Successfully prepared a docker volume multinode-966000
	I0703 16:48:12.806886    7164 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0703 16:48:12.806899    7164 kic.go:194] Starting extracting preloaded images to volume ...
	I0703 16:48:12.806995    7164 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-966000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 -I lz4 -xf /preloaded.tar -C /extractDir
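	# Editor's note: the preload is restored by untarring an lz4 archive into
	# the machine volume; stripped of the long paths above (placeholders in
	# angle brackets), the command is:
	#   docker run --rm --entrypoint /usr/bin/tar \
	#     -v <preloaded-images.tar.lz4>:/preloaded.tar:ro \
	#     -v multinode-966000:/extractDir \
	#     <kicbase-image> -I lz4 -xf /preloaded.tar -C /extractDir
	# Note the six-minute gap to the next log line: this extraction stalls and
	# consumes the remainder of the 360 s create-host budget.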
	I0703 16:54:12.414246    7164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 16:54:12.414373    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:54:12.435920    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:54:12.436016    7164 retry.go:31] will retry after 176.129707ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:12.614508    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:54:12.635972    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:54:12.636090    7164 retry.go:31] will retry after 222.457765ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:12.859177    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:54:12.881595    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:54:12.881706    7164 retry.go:31] will retry after 322.310025ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:13.206454    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:54:13.228112    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:54:13.228218    7164 retry.go:31] will retry after 1.085900123s: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:14.314780    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:54:14.336903    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	W0703 16:54:14.337008    7164 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	
	W0703 16:54:14.337028    7164 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:14.337089    7164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0703 16:54:14.337141    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:54:14.356413    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:54:14.356510    7164 retry.go:31] will retry after 364.069198ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:14.722967    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:54:14.744381    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:54:14.744480    7164 retry.go:31] will retry after 197.542037ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:14.942469    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:54:14.963654    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:54:14.963752    7164 retry.go:31] will retry after 469.517589ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:15.434488    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:54:15.457201    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:54:15.457310    7164 retry.go:31] will retry after 669.942018ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:16.129730    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:54:16.153066    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	W0703 16:54:16.153165    7164 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	
	W0703 16:54:16.153184    7164 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:16.153199    7164 start.go:128] duration metric: took 6m3.761687869s to createHost
	I0703 16:54:16.153268    7164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 16:54:16.153332    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:54:16.173278    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:54:16.173381    7164 retry.go:31] will retry after 340.237727ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:16.515392    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:54:16.538094    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:54:16.538190    7164 retry.go:31] will retry after 513.287042ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:17.053884    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:54:17.076836    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:54:17.076931    7164 retry.go:31] will retry after 430.010984ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:17.509359    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:54:17.531044    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	W0703 16:54:17.531141    7164 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	
	W0703 16:54:17.531158    7164 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:17.531217    7164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0703 16:54:17.531269    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:54:17.551283    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:54:17.551376    7164 retry.go:31] will retry after 309.77914ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:17.862073    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:54:17.882975    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:54:17.883073    7164 retry.go:31] will retry after 475.35198ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:18.358709    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:54:18.379075    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	I0703 16:54:18.379181    7164 retry.go:31] will retry after 358.214616ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:18.738695    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000
	W0703 16:54:18.760119    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000 returned with exit code 1
	W0703 16:54:18.760220    7164 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	
	W0703 16:54:18.760235    7164 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-966000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-966000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:18.760270    7164 fix.go:56] duration metric: took 6m23.068281372s for fixHost
	I0703 16:54:18.760276    7164 start.go:83] releasing machines lock for "multinode-966000", held for 6m23.068321903s
	W0703 16:54:18.760353    7164 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-966000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-966000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0703 16:54:18.803443    7164 out.go:177] 
	W0703 16:54:18.824842    7164 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0703 16:54:18.824900    7164 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0703 16:54:18.824922    7164 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0703 16:54:18.845942    7164 out.go:177] 

                                                
                                                
** /stderr **
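The DRV_CREATE_TIMEOUT exit above ships with its own remedy; a minimal recovery sketch using the profile name from this run (the prune step is an optional extra, not taken from the log):

    minikube delete -p multinode-966000        # drop the half-created profile
    docker network prune -f                    # optional: clear stale unused networks
    out/minikube-darwin-amd64 start -p multinode-966000 --driver=docker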
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-966000" : exit status 52
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-966000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-966000
helpers_test.go:235: (dbg) docker inspect multinode-966000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-966000",
	        "Id": "f93680944aa83b59dc1dc6170ba225e4c78a565a5ab4e35015b7963948b1735e",
	        "Created": "2024-07-03T23:48:12.4825357Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-966000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
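Note that the JSON above is a docker network object (bridge driver, IPAM block, empty Containers map): with the container gone, docker inspect fell through to the leftover network of the same name. To disambiguate by hand:

    docker container inspect multinode-966000                            # fails: No such container
    docker network inspect multinode-966000 --format '{{.IPAM.Config}}'  # the surviving resource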
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-966000 -n multinode-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-966000 -n multinode-966000: exit status 7 (75.385073ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0703 16:54:19.097118    7666 status.go:131] status error: host: state: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-966000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (792.16s)

                                                
                                    
TestMultiNode/serial/DeleteNode (0.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-966000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-966000 node delete m03: exit status 80 (160.468447ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-966000 host status: state: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	

                                                
                                                
** /stderr **
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-amd64 -p multinode-966000 node delete m03": exit status 80
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-966000 status --alsologtostderr
multinode_test.go:428: status says both hosts are not running: args "out/minikube-darwin-amd64 -p multinode-966000 status --alsologtostderr": 
multinode_test.go:432: status says both kubelets are not running: args "out/minikube-darwin-amd64 -p multinode-966000 status --alsologtostderr": 
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:436: (dbg) Non-zero exit: kubectl get nodes: exit status 1 (42.384534ms)

                                                
                                                
** stderr ** 
	E0703 16:54:19.373944    7675 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E0703 16:54:19.374412    7675 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E0703 16:54:19.375505    7675 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E0703 16:54:19.375902    7675 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E0703 16:54:19.377094    7675 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	The connection to the server localhost:8080 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
multinode_test.go:438: failed to run kubectl get nodes. args "kubectl get nodes" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-966000
helpers_test.go:235: (dbg) docker inspect multinode-966000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-966000",
	        "Id": "f93680944aa83b59dc1dc6170ba225e4c78a565a5ab4e35015b7963948b1735e",
	        "Created": "2024-07-03T23:48:12.4825357Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-966000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-966000 -n multinode-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-966000 -n multinode-966000: exit status 7 (75.691457ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0703 16:54:19.476195    7677 status.go:131] status error: host: state: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-966000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.38s)
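The kubectl errors above follow from the same missing cluster rather than a second bug: with no usable server behind the kubeconfig, kubectl falls back to http://localhost:8080, and every API probe fails with "connection refused". A hedged Go sketch of separating that case from other kubectl failures (the helper name is invented for illustration):

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
		"strings"
	)

	// clusterReachable is an invented helper that distinguishes "no API
	// server behind the kubeconfig" (kubectl's localhost:8080 fallback
	// refusing connections, as in the log) from other kubectl failures.
	func clusterReachable() (bool, error) {
		var stderr bytes.Buffer
		cmd := exec.Command("kubectl", "get", "nodes")
		cmd.Stderr = &stderr
		if err := cmd.Run(); err != nil {
			if strings.Contains(stderr.String(), "connection refused") {
				return false, nil // the cluster simply is not there
			}
			return false, fmt.Errorf("kubectl failed: %v: %s", err, stderr.String())
		}
		return true, nil
	}

	func main() {
		ok, err := clusterReachable()
		fmt.Println(ok, err)
	}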

                                                
                                    
TestMultiNode/serial/StopMultiNode (15.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-966000 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-966000 stop: exit status 82 (15.742305645s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-966000"  ...
	* Stopping node "multinode-966000"  ...
	* Stopping node "multinode-966000"  ...
	* Stopping node "multinode-966000"  ...
	* Stopping node "multinode-966000"  ...
	* Stopping node "multinode-966000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-966000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-darwin-amd64 -p multinode-966000 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-966000 status
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-966000 status --alsologtostderr
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-966000 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-966000 status --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-966000
helpers_test.go:235: (dbg) docker inspect multinode-966000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-966000",
	        "Id": "f93680944aa83b59dc1dc6170ba225e4c78a565a5ab4e35015b7963948b1735e",
	        "Created": "2024-07-03T23:48:12.4825357Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-966000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-966000 -n multinode-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-966000 -n multinode-966000: exit status 7 (74.902791ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0703 16:54:35.468318    7698 status.go:131] status error: host: state: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-966000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (15.99s)
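The stop failure is the missing container again, surfaced through a different guard: "Stopping node" is attempted six times, the container state can never be confirmed, and the command aborts with exit status 82 (GUEST_STOP_TIMEOUT). A rough sketch of such a bounded stop loop, assuming a fixed attempt budget and sleep (not minikube's actual values):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// stopNode sketches the bounded loop implied by the six "Stopping node"
	// lines: ask docker to stop the container, retry a fixed number of
	// times, and give up with a timeout-style error when the state never
	// settles. Attempt count and sleep are assumptions.
	func stopNode(name string, attempts int) error {
		for i := 0; i < attempts; i++ {
			fmt.Printf("* Stopping node %q ...\n", name)
			if err := exec.Command("docker", "stop", name).Run(); err == nil {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("GUEST_STOP_TIMEOUT: unable to stop %q", name)
	}

	func main() {
		if err := stopNode("multinode-966000", 6); err != nil {
			fmt.Println("X Exiting due to", err)
		}
	}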

                                                
                                    
TestMultiNode/serial/RestartMultiNode (126.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-966000 --wait=true -v=8 --alsologtostderr --driver=docker 
E0703 16:55:42.680696    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 16:56:00.625945    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-966000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (2m6.133482966s)

                                                
                                                
-- stdout --
	* [multinode-966000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=18998
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18998-1161/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18998-1161/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-966000" primary control-plane node in "multinode-966000" cluster
	* Pulling base image v0.0.44-1719972989-19184 ...
	* docker "multinode-966000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0703 16:54:35.521970    7701 out.go:291] Setting OutFile to fd 1 ...
	I0703 16:54:35.522221    7701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 16:54:35.522227    7701 out.go:304] Setting ErrFile to fd 2...
	I0703 16:54:35.522230    7701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 16:54:35.522420    7701 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18998-1161/.minikube/bin
	I0703 16:54:35.523852    7701 out.go:298] Setting JSON to false
	I0703 16:54:35.546337    7701 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5044,"bootTime":1720045831,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0703 16:54:35.546435    7701 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0703 16:54:35.568471    7701 out.go:177] * [multinode-966000] minikube v1.33.1 on Darwin 14.5
	I0703 16:54:35.590013    7701 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 16:54:35.590071    7701 notify.go:220] Checking for updates...
	I0703 16:54:35.633028    7701 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18998-1161/kubeconfig
	I0703 16:54:35.653889    7701 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0703 16:54:35.675167    7701 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 16:54:35.696130    7701 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18998-1161/.minikube
	I0703 16:54:35.717138    7701 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 16:54:35.738724    7701 config.go:182] Loaded profile config "multinode-966000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0703 16:54:35.739481    7701 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 16:54:35.766097    7701 docker.go:122] docker version: linux-26.1.4:Docker Desktop 4.31.0 (153195)
	I0703 16:54:35.766275    7701 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 16:54:35.849910    7701 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:113 SystemTime:2024-07-03 23:54:35.840667884 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.6.31-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:33654255616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.24] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.2.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.9.3]] Warnings:<nil>}}
	I0703 16:54:35.892784    7701 out.go:177] * Using the docker driver based on existing profile
	I0703 16:54:35.913733    7701 start.go:297] selected driver: docker
	I0703 16:54:35.913760    7701 start.go:901] validating driver "docker" against &{Name:multinode-966000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-966000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 16:54:35.913875    7701 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 16:54:35.914069    7701 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 16:54:35.997441    7701 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:113 SystemTime:2024-07-03 23:54:35.987963832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.6.31-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:33654255616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.24] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.2.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.9.3]] Warnings:<nil>}}
	I0703 16:54:36.000442    7701 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 16:54:36.000478    7701 cni.go:84] Creating CNI manager for ""
	I0703 16:54:36.000486    7701 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0703 16:54:36.000551    7701 start.go:340] cluster config:
	{Name:multinode-966000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-966000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 16:54:36.042817    7701 out.go:177] * Starting "multinode-966000" primary control-plane node in "multinode-966000" cluster
	I0703 16:54:36.063951    7701 cache.go:121] Beginning downloading kic base image for docker with docker
	I0703 16:54:36.085098    7701 out.go:177] * Pulling base image v0.0.44-1719972989-19184 ...
	I0703 16:54:36.127046    7701 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0703 16:54:36.127098    7701 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 in local docker daemon
	I0703 16:54:36.127120    7701 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0703 16:54:36.127141    7701 cache.go:56] Caching tarball of preloaded images
	I0703 16:54:36.127353    7701 preload.go:173] Found /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0703 16:54:36.127372    7701 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0703 16:54:36.127517    7701 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/multinode-966000/config.json ...
	I0703 16:54:36.148541    7701 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 in local docker daemon, skipping pull
	I0703 16:54:36.148570    7701 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 exists in daemon, skipping load
	I0703 16:54:36.148590    7701 cache.go:194] Successfully downloaded all kic artifacts
	I0703 16:54:36.148647    7701 start.go:360] acquireMachinesLock for multinode-966000: {Name:mk9a872cb80fb41099765e3cf2904deb4ec994cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 16:54:36.148750    7701 start.go:364] duration metric: took 83.923µs to acquireMachinesLock for "multinode-966000"
	I0703 16:54:36.148773    7701 start.go:96] Skipping create...Using existing machine configuration
	I0703 16:54:36.148783    7701 fix.go:54] fixHost starting: 
	I0703 16:54:36.149011    7701 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:54:36.168480    7701 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:54:36.168551    7701 fix.go:112] recreateIfNeeded on multinode-966000: state= err=unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:36.168569    7701 fix.go:117] machineExists: false. err=machine does not exist
	I0703 16:54:36.190146    7701 out.go:177] * docker "multinode-966000" container is missing, will recreate.
	I0703 16:54:36.232098    7701 delete.go:124] DEMOLISHING multinode-966000 ...
	I0703 16:54:36.232258    7701 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:54:36.251712    7701 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	W0703 16:54:36.251756    7701 stop.go:83] unable to get state: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:36.251776    7701 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:36.252143    7701 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:54:36.271665    7701 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:54:36.271718    7701 delete.go:82] Unable to get host status for multinode-966000, assuming it has already been deleted: state: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:36.271810    7701 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-966000
	W0703 16:54:36.291155    7701 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-966000 returned with exit code 1
	I0703 16:54:36.291187    7701 kic.go:371] could not find the container multinode-966000 to remove it. will try anyways
	I0703 16:54:36.291257    7701 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:54:36.310374    7701 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	W0703 16:54:36.310414    7701 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:36.310490    7701 cli_runner.go:164] Run: docker exec --privileged -t multinode-966000 /bin/bash -c "sudo init 0"
	W0703 16:54:36.329996    7701 cli_runner.go:211] docker exec --privileged -t multinode-966000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0703 16:54:36.330039    7701 oci.go:650] error shutdown multinode-966000: docker exec --privileged -t multinode-966000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:37.331172    7701 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:54:37.353568    7701 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:54:37.353610    7701 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:37.353619    7701 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:54:37.353655    7701 retry.go:31] will retry after 686.596947ms: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:38.040592    7701 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:54:38.061171    7701 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:54:38.061216    7701 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:38.061225    7701 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:54:38.061250    7701 retry.go:31] will retry after 758.615894ms: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:38.820512    7701 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:54:38.841490    7701 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:54:38.841533    7701 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:38.841546    7701 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:54:38.841571    7701 retry.go:31] will retry after 1.542919368s: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:40.386376    7701 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:54:40.408419    7701 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:54:40.408462    7701 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:40.408471    7701 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:54:40.408497    7701 retry.go:31] will retry after 892.720918ms: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:41.303554    7701 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:54:41.325195    7701 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:54:41.325236    7701 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:41.325245    7701 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:54:41.325272    7701 retry.go:31] will retry after 2.011879295s: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:43.339642    7701 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:54:43.361806    7701 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:54:43.361849    7701 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:43.361859    7701 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:54:43.361883    7701 retry.go:31] will retry after 1.945086714s: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:45.308183    7701 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:54:45.330022    7701 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:54:45.330067    7701 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:45.330075    7701 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:54:45.330099    7701 retry.go:31] will retry after 6.600827445s: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:51.932185    7701 cli_runner.go:164] Run: docker container inspect multinode-966000 --format={{.State.Status}}
	W0703 16:54:51.953795    7701 cli_runner.go:211] docker container inspect multinode-966000 --format={{.State.Status}} returned with exit code 1
	I0703 16:54:51.953840    7701 oci.go:662] temporary error verifying shutdown: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	I0703 16:54:51.953850    7701 oci.go:664] temporary error: container multinode-966000 status is  but expect it to be exited
	I0703 16:54:51.953882    7701 oci.go:88] couldn't shut down multinode-966000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000
	 
	I0703 16:54:51.953958    7701 cli_runner.go:164] Run: docker rm -f -v multinode-966000
	I0703 16:54:51.973928    7701 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-966000
	W0703 16:54:51.993202    7701 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-966000 returned with exit code 1
	I0703 16:54:51.993320    7701 cli_runner.go:164] Run: docker network inspect multinode-966000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0703 16:54:52.012926    7701 cli_runner.go:164] Run: docker network rm multinode-966000
	I0703 16:54:52.087949    7701 fix.go:124] Sleeping 1 second for extra luck!
	I0703 16:54:53.090116    7701 start.go:125] createHost starting for "" (driver="docker")
	I0703 16:54:53.112361    7701 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0703 16:54:53.112561    7701 start.go:159] libmachine.API.Create for "multinode-966000" (driver="docker")
	I0703 16:54:53.112614    7701 client.go:168] LocalClient.Create starting
	I0703 16:54:53.112839    7701 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18998-1161/.minikube/certs/ca.pem
	I0703 16:54:53.112990    7701 main.go:141] libmachine: Decoding PEM data...
	I0703 16:54:53.113034    7701 main.go:141] libmachine: Parsing certificate...
	I0703 16:54:53.113132    7701 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18998-1161/.minikube/certs/cert.pem
	I0703 16:54:53.113212    7701 main.go:141] libmachine: Decoding PEM data...
	I0703 16:54:53.113228    7701 main.go:141] libmachine: Parsing certificate...
	I0703 16:54:53.114111    7701 cli_runner.go:164] Run: docker network inspect multinode-966000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0703 16:54:53.135633    7701 cli_runner.go:211] docker network inspect multinode-966000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0703 16:54:53.135723    7701 network_create.go:284] running [docker network inspect multinode-966000] to gather additional debugging logs...
	I0703 16:54:53.135741    7701 cli_runner.go:164] Run: docker network inspect multinode-966000
	W0703 16:54:53.155960    7701 cli_runner.go:211] docker network inspect multinode-966000 returned with exit code 1
	I0703 16:54:53.155988    7701 network_create.go:287] error running [docker network inspect multinode-966000]: docker network inspect multinode-966000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-966000 not found
	I0703 16:54:53.156002    7701 network_create.go:289] output of [docker network inspect multinode-966000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-966000 not found
	
	** /stderr **
	I0703 16:54:53.156142    7701 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0703 16:54:53.178429    7701 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 16:54:53.180065    7701 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 16:54:53.180445    7701 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017c4760}
	I0703 16:54:53.180462    7701 network_create.go:124] attempt to create docker network multinode-966000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0703 16:54:53.180532    7701 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-966000 multinode-966000
	I0703 16:54:53.236427    7701 network_create.go:108] docker network multinode-966000 192.168.67.0/24 created
	I0703 16:54:53.236466    7701 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-966000" container
	I0703 16:54:53.236580    7701 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0703 16:54:53.257128    7701 cli_runner.go:164] Run: docker volume create multinode-966000 --label name.minikube.sigs.k8s.io=multinode-966000 --label created_by.minikube.sigs.k8s.io=true
	I0703 16:54:53.276643    7701 oci.go:103] Successfully created a docker volume multinode-966000
	I0703 16:54:53.276770    7701 cli_runner.go:164] Run: docker run --rm --name multinode-966000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-966000 --entrypoint /usr/bin/test -v multinode-966000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 -d /var/lib
	I0703 16:54:53.534576    7701 oci.go:107] Successfully prepared a docker volume multinode-966000
	I0703 16:54:53.534630    7701 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0703 16:54:53.534647    7701 kic.go:194] Starting extracting preloaded images to volume ...
	I0703 16:54:53.534746    7701 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-966000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 -I lz4 -xf /preloaded.tar -C /extractDir

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-966000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
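Up to the point the harness killed it, the recreate path was actually progressing: the stale "multinode-966000" network was removed and a new one created on the first free private /24, skipping the reserved 192.168.49.0/24 and 192.168.58.0/24 and settling on 192.168.67.0/24. A sketch of that walk follows; the step of 9 between candidates is inferred from the addresses in the log, and the reserved set is hard-coded here purely for illustration:

	package main

	import "fmt"

	// freeSubnet walks candidate private /24s the way the network_create
	// lines suggest: start at 192.168.49.0/24 and step the third octet by 9
	// until a subnet no existing docker network occupies is found. The real
	// code inspects live networks instead of taking a fixed set.
	func freeSubnet(reserved map[string]bool) string {
		for third := 49; third <= 247; third += 9 {
			candidate := fmt.Sprintf("192.168.%d.0/24", third)
			if !reserved[candidate] {
				return candidate
			}
		}
		return ""
	}

	func main() {
		inUse := map[string]bool{"192.168.49.0/24": true, "192.168.58.0/24": true}
		fmt.Println(freeSubnet(inUse)) // prints 192.168.67.0/24
	}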
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-966000
helpers_test.go:235: (dbg) docker inspect multinode-966000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-966000",
	        "Id": "7199b61dce758a1cd5b06e097125e1c810563581688a7fa5846fff886480c0e0",
	        "Created": "2024-07-03T23:54:53.197984019Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-966000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-966000 -n multinode-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-966000 -n multinode-966000: exit status 7 (79.035263ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0703 16:56:41.708706    7819 status.go:131] status error: host: state: unknown state "multinode-966000": docker container inspect multinode-966000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-966000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-966000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (126.24s)
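The retry.go lines in the stderr above (waits of roughly 0.7s, 0.8s, 1.5s, 0.9s, 2s, 1.9s and finally 6.6s before giving up) point at a jittered, growing backoff around the shutdown check. A self-contained sketch of that pattern, offered as an illustration rather than minikube's code:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry doubles a base delay each attempt, adds up to 50% random
	// jitter, and stops once maxElapsed has passed. The base delay and
	// budget are assumptions chosen to resemble the waits in the log.
	func retry(maxElapsed time.Duration, fn func() error) error {
		base := 500 * time.Millisecond
		deadline := time.Now().Add(maxElapsed)
		for attempt := 0; ; attempt++ {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("giving up after %d attempts: %w", attempt+1, err)
			}
			d := base << attempt                        // 0.5s, 1s, 2s, ...
			d += time.Duration(rand.Int63n(int64(d / 2))) // plus jitter
			time.Sleep(d)
		}
	}

	func main() {
		err := retry(5*time.Second, func() error {
			return errors.New(`unknown state "multinode-966000"`)
		})
		fmt.Println(err)
	}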

                                                
                                    
TestScheduledStopUnix (300.56s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-170000 --memory=2048 --driver=docker 
E0703 17:00:42.681488    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 17:01:00.625355    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 17:02:05.732429    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-170000 --memory=2048 --driver=docker : signal: killed (5m0.004136849s)

                                                
                                                
-- stdout --
	* [scheduled-stop-170000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=18998
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18998-1161/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18998-1161/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-170000" primary control-plane node in "scheduled-stop-170000" cluster
	* Pulling base image v0.0.44-1719972989-19184 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [scheduled-stop-170000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=18998
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18998-1161/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18998-1161/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-170000" primary control-plane node in "scheduled-stop-170000" cluster
	* Pulling base image v0.0.44-1719972989-19184 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-03 17:03:39.283023 -0700 PDT m=+4655.166055636
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-170000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-170000:
-- stdout --
	[
	    {
	        "Name": "scheduled-stop-170000",
	        "Id": "678c2f3bcc7806a11483b6a43619d8cbf3827a5e1af39fc10f79fb1206362e72",
	        "Created": "2024-07-03T23:58:40.119386692Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-170000"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-170000 -n scheduled-stop-170000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-170000 -n scheduled-stop-170000: exit status 7 (79.649291ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0703 17:03:39.386275    8294 status.go:131] status error: host: state: unknown state "scheduled-stop-170000": docker container inspect scheduled-stop-170000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-170000
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-170000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-170000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-170000
--- FAIL: TestScheduledStopUnix (300.56s)
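Note: this failure is a harness-side timeout, not a minikube exit: the start command was killed at the test's 5-minute budget (signal: killed after 5m0.004s) while still at "Creating docker container", and the post-mortem docker inspect above matched only the profile's bridge network ("Containers": {}), so the container itself was never created. A hedged local repro sketch, using GNU coreutils timeout (gtimeout from Homebrew on macOS) as a stand-in for the harness's context deadline:

	# Re-run the same start under the same 5-minute cap, then check what was
	# actually provisioned: expect the network to exist but no container.
	timeout 300 out/minikube-darwin-amd64 start -p scheduled-stop-170000 --memory=2048 --driver=docker
	docker network inspect scheduled-stop-170000 --format '{{json .Containers}}'   # {}
	docker ps -a --filter name=scheduled-stop-170000                               # no rows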
TestSkaffold (300.57s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe191128149 version
skaffold_test.go:59: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe191128149 version: (1.455558653s)
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-439000 --memory=2600 --driver=docker 
E0703 17:05:42.683733    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 17:06:00.628914    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 17:07:23.717792    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-439000 --memory=2600 --driver=docker : signal: killed (4m56.341925119s)
-- stdout --
	* [skaffold-439000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=18998
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18998-1161/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18998-1161/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-439000" primary control-plane node in "skaffold-439000" cluster
	* Pulling base image v0.0.44-1719972989-19184 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...
-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed
-- stdout --
	* [skaffold-439000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=18998
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18998-1161/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18998-1161/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-439000" primary control-plane node in "skaffold-439000" cluster
	* Pulling base image v0.0.44-1719972989-19184 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...
-- /stdout --
panic.go:626: *** TestSkaffold FAILED at 2024-07-03 17:08:39.882316 -0700 PDT m=+4955.729135225
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-439000
helpers_test.go:235: (dbg) docker inspect skaffold-439000:
-- stdout --
	[
	    {
	        "Name": "skaffold-439000",
	        "Id": "5e92ec2666b56bf0c621bf1705d14f1ad1db35b3392977351dfba7ae67d629ef",
	        "Created": "2024-07-04T00:03:44.327423525Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-439000"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-439000 -n skaffold-439000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-439000 -n skaffold-439000: exit status 7 (75.668396ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0703 17:08:39.981941    8447 status.go:131] status error: host: state: unknown state "skaffold-439000": docker container inspect skaffold-439000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-439000
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-439000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-439000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-439000
--- FAIL: TestSkaffold (300.57s)
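Note: skaffold itself was healthy (version v2.12.0 resolved in about 1.5s); the test was killed in the same provisioning hang as TestScheduledStopUnix, before skaffold was ever exercised. The post-mortem docker inspect returned JSON only because a bare docker inspect falls back across object types: it matched the network named skaffold-439000, while the container of that name never existed. A sketch that separates the two lookups explicitly:

	# Scope the inspect to each object type instead of relying on the fallback.
	docker network inspect skaffold-439000 --format '{{.Name}} {{json .Containers}}'
	docker container inspect skaffold-439000 --format '{{.State.Status}}'
	# => Error response from daemon: No such container: skaffold-439000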
TestInsufficientStorage (300.46s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-349000 --memory=2048 --output=json --wait=true --driver=docker 
E0703 17:10:42.719173    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 17:11:00.663244    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-349000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.003713255s)
-- stdout --
	{"specversion":"1.0","id":"85623410-f572-47a0-97cd-6e0002ad88cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-349000] minikube v1.33.1 on Darwin 14.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d5e09a62-95ed-4683-a746-fe101297a83c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18998"}}
	{"specversion":"1.0","id":"cdbacbdc-6ead-47fd-83ef-6d22edd7149c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18998-1161/kubeconfig"}}
	{"specversion":"1.0","id":"d8253c44-4e65-4eca-b419-5efff315c783","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"3671aff2-1dad-4cfd-ab7b-c30610d1f31a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"daf5f703-fb5e-4918-99a0-434f5314a830","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18998-1161/.minikube"}}
	{"specversion":"1.0","id":"84c909b5-1222-4b1b-9c7d-cd4fa1a68ad1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a68eb2ad-ba6c-4c5c-b5ab-987ca90b3476","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"bff52833-d733-4c40-bb13-dbee6390204f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"33cbca18-d58e-45fe-adcd-1268d9ad3b59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e7e3b87e-1f28-4f85-948a-9ce68091f95e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"e81d28d3-9b90-48ec-a1c5-4eec34bc511c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-349000\" primary control-plane node in \"insufficient-storage-349000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9669c8f5-66cc-4d18-bc29-4fdc5826e051","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1719972989-19184 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2e1193c2-d8cf-458a-9634-b34e2cb51341","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-349000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-349000 --output=json --layout=cluster: context deadline exceeded (730ns)
status_test.go:87: unmarshalling: unexpected end of JSON input
helpers_test.go:175: Cleaning up "insufficient-storage-349000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-349000
--- FAIL: TestInsufficientStorage (300.46s)
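Note: both trailing errors follow from the exhausted test budget. The status probe inherited a context with roughly 730ns left, so it was cancelled before writing anything, and decoding the empty stdout is what yields "unexpected end of JSON input" (Go's json.Unmarshal error for zero-length input). A hedged sketch of the same probe without the expired deadline (only meaningful before the cleanup step deletes the profile):

	# Given time to run, the probe emits cluster JSON (or a real error) rather
	# than the zero bytes captured under the ~730ns deadline above.
	out/minikube-darwin-amd64 status -p insufficient-storage-349000 --output=json --layout=cluster
	# Empty stdout is not valid JSON: json.Unmarshal on "" returns
	# "unexpected end of JSON input", the status_test.go:87 failure above.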
TestKubernetesUpgrade (762.6s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-022000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-022000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker : exit status 52 (12m32.037349594s)
-- stdout --
	* [kubernetes-upgrade-022000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=18998
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18998-1161/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18998-1161/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "kubernetes-upgrade-022000" primary control-plane node in "kubernetes-upgrade-022000" cluster
	* Pulling base image v0.0.44-1719972989-19184 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "kubernetes-upgrade-022000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	
-- /stdout --
** stderr ** 
	I0703 17:26:14.260828    9534 out.go:291] Setting OutFile to fd 1 ...
	I0703 17:26:14.261087    9534 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 17:26:14.261092    9534 out.go:304] Setting ErrFile to fd 2...
	I0703 17:26:14.261096    9534 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 17:26:14.261269    9534 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18998-1161/.minikube/bin
	I0703 17:26:14.262746    9534 out.go:298] Setting JSON to false
	I0703 17:26:14.285308    9534 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6943,"bootTime":1720045831,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0703 17:26:14.285418    9534 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0703 17:26:14.307559    9534 out.go:177] * [kubernetes-upgrade-022000] minikube v1.33.1 on Darwin 14.5
	I0703 17:26:14.329160    9534 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 17:26:14.329199    9534 notify.go:220] Checking for updates...
	I0703 17:26:14.373158    9534 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18998-1161/kubeconfig
	I0703 17:26:14.395156    9534 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0703 17:26:14.417028    9534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 17:26:14.438085    9534 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18998-1161/.minikube
	I0703 17:26:14.459955    9534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 17:26:14.481739    9534 config.go:182] Loaded profile config "missing-upgrade-772000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0703 17:26:14.481902    9534 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 17:26:14.508362    9534 docker.go:122] docker version: linux-26.1.4:Docker Desktop 4.31.0 (153195)
	I0703 17:26:14.508532    9534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 17:26:14.591715    9534 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:false NGoroutines:193 SystemTime:2024-07-04 00:26:14.582715155 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.6.31-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:33654255616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.24] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.2.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.9.3]] Warnings:<nil>}}
	I0703 17:26:14.634821    9534 out.go:177] * Using the docker driver based on user configuration
	I0703 17:26:14.655526    9534 start.go:297] selected driver: docker
	I0703 17:26:14.655604    9534 start.go:901] validating driver "docker" against <nil>
	I0703 17:26:14.655620    9534 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 17:26:14.659852    9534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 17:26:14.738845    9534 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:false NGoroutines:193 SystemTime:2024-07-04 00:26:14.729627045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.6.31-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:33654255616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.24] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.2.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.9.3]] Warnings:<nil>}}
	I0703 17:26:14.739032    9534 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0703 17:26:14.739226    9534 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0703 17:26:14.761199    9534 out.go:177] * Using Docker Desktop driver with root privileges
	I0703 17:26:14.782855    9534 cni.go:84] Creating CNI manager for ""
	I0703 17:26:14.782894    9534 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0703 17:26:14.782989    9534 start.go:340] cluster config:
	{Name:kubernetes-upgrade-022000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 17:26:14.827124    9534 out.go:177] * Starting "kubernetes-upgrade-022000" primary control-plane node in "kubernetes-upgrade-022000" cluster
	I0703 17:26:14.849002    9534 cache.go:121] Beginning downloading kic base image for docker with docker
	I0703 17:26:14.871006    9534 out.go:177] * Pulling base image v0.0.44-1719972989-19184 ...
	I0703 17:26:14.918779    9534 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0703 17:26:14.918846    9534 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 in local docker daemon
	I0703 17:26:14.918856    9534 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0703 17:26:14.918887    9534 cache.go:56] Caching tarball of preloaded images
	I0703 17:26:14.919115    9534 preload.go:173] Found /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0703 17:26:14.919134    9534 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0703 17:26:14.920018    9534 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/kubernetes-upgrade-022000/config.json ...
	I0703 17:26:14.920230    9534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/kubernetes-upgrade-022000/config.json: {Name:mkf6885c46d9d26afedcbdc4d19f4f4e50b6a63e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 17:26:14.941152    9534 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 in local docker daemon, skipping pull
	I0703 17:26:14.941181    9534 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 exists in daemon, skipping load
	I0703 17:26:14.941201    9534 cache.go:194] Successfully downloaded all kic artifacts
	I0703 17:26:14.941238    9534 start.go:360] acquireMachinesLock for kubernetes-upgrade-022000: {Name:mk17cc4e8909c44cb7161be02d4a0d3cf333011d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 17:26:14.941403    9534 start.go:364] duration metric: took 153.625µs to acquireMachinesLock for "kubernetes-upgrade-022000"
	I0703 17:26:14.941430    9534 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-022000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0703 17:26:14.941512    9534 start.go:125] createHost starting for "" (driver="docker")
	I0703 17:26:14.963651    9534 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0703 17:26:14.964003    9534 start.go:159] libmachine.API.Create for "kubernetes-upgrade-022000" (driver="docker")
	I0703 17:26:14.964050    9534 client.go:168] LocalClient.Create starting
	I0703 17:26:14.964257    9534 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18998-1161/.minikube/certs/ca.pem
	I0703 17:26:14.964360    9534 main.go:141] libmachine: Decoding PEM data...
	I0703 17:26:14.964396    9534 main.go:141] libmachine: Parsing certificate...
	I0703 17:26:14.964480    9534 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18998-1161/.minikube/certs/cert.pem
	I0703 17:26:14.964561    9534 main.go:141] libmachine: Decoding PEM data...
	I0703 17:26:14.964587    9534 main.go:141] libmachine: Parsing certificate...
	I0703 17:26:14.965454    9534 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-022000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0703 17:26:14.985717    9534 cli_runner.go:211] docker network inspect kubernetes-upgrade-022000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0703 17:26:14.985834    9534 network_create.go:284] running [docker network inspect kubernetes-upgrade-022000] to gather additional debugging logs...
	I0703 17:26:14.985852    9534 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-022000
	W0703 17:26:15.005202    9534 cli_runner.go:211] docker network inspect kubernetes-upgrade-022000 returned with exit code 1
	I0703 17:26:15.005234    9534 network_create.go:287] error running [docker network inspect kubernetes-upgrade-022000]: docker network inspect kubernetes-upgrade-022000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-022000 not found
	I0703 17:26:15.005255    9534 network_create.go:289] output of [docker network inspect kubernetes-upgrade-022000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-022000 not found
	
	** /stderr **
	I0703 17:26:15.005385    9534 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0703 17:26:15.026797    9534 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 17:26:15.028338    9534 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 17:26:15.028884    9534 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00147b050}
	I0703 17:26:15.028902    9534 network_create.go:124] attempt to create docker network kubernetes-upgrade-022000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0703 17:26:15.029052    9534 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-022000 kubernetes-upgrade-022000
	W0703 17:26:15.048562    9534 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-022000 kubernetes-upgrade-022000 returned with exit code 1
	W0703 17:26:15.048609    9534 network_create.go:149] failed to create docker network kubernetes-upgrade-022000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-022000 kubernetes-upgrade-022000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0703 17:26:15.048626    9534 network_create.go:116] failed to create docker network kubernetes-upgrade-022000 192.168.67.0/24, will retry: subnet is taken
	I0703 17:26:15.050239    9534 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 17:26:15.050619    9534 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015bf270}
	I0703 17:26:15.050632    9534 network_create.go:124] attempt to create docker network kubernetes-upgrade-022000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0703 17:26:15.050702    9534 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-022000 kubernetes-upgrade-022000
	W0703 17:26:15.070116    9534 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-022000 kubernetes-upgrade-022000 returned with exit code 1
	W0703 17:26:15.070168    9534 network_create.go:149] failed to create docker network kubernetes-upgrade-022000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-022000 kubernetes-upgrade-022000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0703 17:26:15.070185    9534 network_create.go:116] failed to create docker network kubernetes-upgrade-022000 192.168.76.0/24, will retry: subnet is taken
	I0703 17:26:15.071782    9534 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 17:26:15.072152    9534 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0013d1d40}
	I0703 17:26:15.072166    9534 network_create.go:124] attempt to create docker network kubernetes-upgrade-022000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0703 17:26:15.072239    9534 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-022000 kubernetes-upgrade-022000
	I0703 17:26:15.127603    9534 network_create.go:108] docker network kubernetes-upgrade-022000 192.168.85.0/24 created
	I0703 17:26:15.127638    9534 kic.go:121] calculated static IP "192.168.85.2" for the "kubernetes-upgrade-022000" container
	I0703 17:26:15.127766    9534 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0703 17:26:15.150055    9534 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-022000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-022000 --label created_by.minikube.sigs.k8s.io=true
	I0703 17:26:15.170395    9534 oci.go:103] Successfully created a docker volume kubernetes-upgrade-022000
	I0703 17:26:15.170526    9534 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-022000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-022000 --entrypoint /usr/bin/test -v kubernetes-upgrade-022000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 -d /var/lib
	I0703 17:26:15.491097    9534 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-022000
	I0703 17:26:15.491145    9534 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0703 17:26:15.491163    9534 kic.go:194] Starting extracting preloaded images to volume ...
	I0703 17:26:15.491289    9534 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-022000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0703 17:32:14.967540    9534 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 17:32:14.967680    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:32:14.988800    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	I0703 17:32:14.988906    9534 retry.go:31] will retry after 159.755035ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:15.149874    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:32:15.169412    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	I0703 17:32:15.169525    9534 retry.go:31] will retry after 494.655908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:15.666606    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:32:15.689050    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	I0703 17:32:15.689167    9534 retry.go:31] will retry after 457.454361ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:16.147254    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:32:16.169129    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	I0703 17:32:16.169225    9534 retry.go:31] will retry after 484.2784ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:16.654209    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:32:16.675473    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	W0703 17:32:16.675594    9534 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	
	W0703 17:32:16.675613    9534 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:16.675677    9534 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0703 17:32:16.675746    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:32:16.695937    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	I0703 17:32:16.696035    9534 retry.go:31] will retry after 190.469052ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:16.888469    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:32:16.911442    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	I0703 17:32:16.911536    9534 retry.go:31] will retry after 382.939372ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:17.294772    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:32:17.315350    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	I0703 17:32:17.315442    9534 retry.go:31] will retry after 473.796266ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:17.789953    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:32:17.812343    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	W0703 17:32:17.812441    9534 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	
	W0703 17:32:17.812457    9534 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:17.812477    9534 start.go:128] duration metric: took 6m2.869245398s to createHost
	I0703 17:32:17.812485    9534 start.go:83] releasing machines lock for "kubernetes-upgrade-022000", held for 6m2.869366425s
	W0703 17:32:17.812499    9534 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0703 17:32:17.812951    9534 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}
	W0703 17:32:17.833063    9534 cli_runner.go:211] docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}} returned with exit code 1
	I0703 17:32:17.833113    9534 delete.go:82] Unable to get host status for kubernetes-upgrade-022000, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-022000": docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	W0703 17:32:17.833189    9534 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0703 17:32:17.833201    9534 start.go:728] Will try again in 5 seconds ...
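
The "create host timed out in 360.000000 seconds" above is minikube's fixed six-minute createHost budget (compare the "took 6m2.8s to createHost" duration metric). When it expires, minikube releases the machines lock, tears down whatever half-exists, and retries once after five seconds, as shown here. A quick sanity check that the Docker Desktop daemon itself is reachable, one common cause of this timeout, is simply:

	docker version
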
	I0703 17:32:22.834556    9534 start.go:360] acquireMachinesLock for kubernetes-upgrade-022000: {Name:mk17cc4e8909c44cb7161be02d4a0d3cf333011d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 17:32:22.835654    9534 start.go:364] duration metric: took 1.032171ms to acquireMachinesLock for "kubernetes-upgrade-022000"
	I0703 17:32:22.835746    9534 start.go:96] Skipping create...Using existing machine configuration
	I0703 17:32:22.835763    9534 fix.go:54] fixHost starting: 
	I0703 17:32:22.836275    9534 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}
	W0703 17:32:22.856500    9534 cli_runner.go:211] docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}} returned with exit code 1
	I0703 17:32:22.856557    9534 fix.go:112] recreateIfNeeded on kubernetes-upgrade-022000: state= err=unknown state "kubernetes-upgrade-022000": docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:22.856585    9534 fix.go:117] machineExists: false. err=machine does not exist
	I0703 17:32:22.878310    9534 out.go:177] * docker "kubernetes-upgrade-022000" container is missing, will recreate.
	I0703 17:32:22.921206    9534 delete.go:124] DEMOLISHING kubernetes-upgrade-022000 ...
	I0703 17:32:22.921382    9534 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}
	W0703 17:32:22.942194    9534 cli_runner.go:211] docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}} returned with exit code 1
	W0703 17:32:22.942248    9534 stop.go:83] unable to get state: unknown state "kubernetes-upgrade-022000": docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:22.942265    9534 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "kubernetes-upgrade-022000": docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:22.942657    9534 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}
	W0703 17:32:22.961890    9534 cli_runner.go:211] docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}} returned with exit code 1
	I0703 17:32:22.961950    9534 delete.go:82] Unable to get host status for kubernetes-upgrade-022000, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-022000": docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:22.962050    9534 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-022000
	W0703 17:32:22.981389    9534 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-022000 returned with exit code 1
	I0703 17:32:22.981436    9534 kic.go:371] could not find the container kubernetes-upgrade-022000 to remove it. will try anyways
	I0703 17:32:22.981516    9534 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}
	W0703 17:32:23.000588    9534 cli_runner.go:211] docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}} returned with exit code 1
	W0703 17:32:23.000645    9534 oci.go:84] error getting container status, will try to delete anyways: unknown state "kubernetes-upgrade-022000": docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:23.000741    9534 cli_runner.go:164] Run: docker exec --privileged -t kubernetes-upgrade-022000 /bin/bash -c "sudo init 0"
	W0703 17:32:23.020369    9534 cli_runner.go:211] docker exec --privileged -t kubernetes-upgrade-022000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0703 17:32:23.020398    9534 oci.go:650] error shutdown kubernetes-upgrade-022000: docker exec --privileged -t kubernetes-upgrade-022000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
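
Before force-removing the machine, the delete path first attempts a graceful power-off over docker exec ("sudo init 0"); with no container present this fails as well, and minikube falls through to docker rm. That the container is truly absent can be confirmed directly (empty output means no match):

	docker ps -a --filter name=kubernetes-upgrade-022000 --format '{{.Names}} {{.Status}}'
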
	I0703 17:32:24.021752    9534 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}
	W0703 17:32:24.042658    9534 cli_runner.go:211] docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}} returned with exit code 1
	I0703 17:32:24.042712    9534 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-022000": docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:24.042727    9534 oci.go:664] temporary error: container kubernetes-upgrade-022000 status is  but expect it to be exited
	I0703 17:32:24.042753    9534 retry.go:31] will retry after 723.130912ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-022000": docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:24.767181    9534 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}
	W0703 17:32:24.788856    9534 cli_runner.go:211] docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}} returned with exit code 1
	I0703 17:32:24.788909    9534 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-022000": docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:24.788921    9534 oci.go:664] temporary error: container kubernetes-upgrade-022000 status is  but expect it to be exited
	I0703 17:32:24.788942    9534 retry.go:31] will retry after 548.318703ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-022000": docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:25.339288    9534 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}
	W0703 17:32:25.361356    9534 cli_runner.go:211] docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}} returned with exit code 1
	I0703 17:32:25.361412    9534 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-022000": docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:25.361423    9534 oci.go:664] temporary error: container kubernetes-upgrade-022000 status is  but expect it to be exited
	I0703 17:32:25.361446    9534 retry.go:31] will retry after 809.102228ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-022000": docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:26.170798    9534 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}
	W0703 17:32:26.192432    9534 cli_runner.go:211] docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}} returned with exit code 1
	I0703 17:32:26.192483    9534 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-022000": docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:26.192493    9534 oci.go:664] temporary error: container kubernetes-upgrade-022000 status is  but expect it to be exited
	I0703 17:32:26.192517    9534 retry.go:31] will retry after 2.485044062s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-022000": docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:28.677746    9534 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}
	W0703 17:32:28.698522    9534 cli_runner.go:211] docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}} returned with exit code 1
	I0703 17:32:28.698566    9534 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-022000": docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:28.698581    9534 oci.go:664] temporary error: container kubernetes-upgrade-022000 status is  but expect it to be exited
	I0703 17:32:28.698626    9534 retry.go:31] will retry after 2.488686331s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-022000": docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:31.187882    9534 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}
	W0703 17:32:31.209129    9534 cli_runner.go:211] docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}} returned with exit code 1
	I0703 17:32:31.209176    9534 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-022000": docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:31.209185    9534 oci.go:664] temporary error: container kubernetes-upgrade-022000 status is  but expect it to be exited
	I0703 17:32:31.209216    9534 retry.go:31] will retry after 4.386214517s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-022000": docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:35.596255    9534 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}
	W0703 17:32:35.617960    9534 cli_runner.go:211] docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}} returned with exit code 1
	I0703 17:32:35.618010    9534 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-022000": docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:35.618020    9534 oci.go:664] temporary error: container kubernetes-upgrade-022000 status is  but expect it to be exited
	I0703 17:32:35.618044    9534 retry.go:31] will retry after 3.585898066s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-022000": docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:39.205667    9534 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}
	W0703 17:32:39.227890    9534 cli_runner.go:211] docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}} returned with exit code 1
	I0703 17:32:39.227939    9534 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-022000": docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:32:39.227950    9534 oci.go:664] temporary error: container kubernetes-upgrade-022000 status is  but expect it to be exited
	I0703 17:32:39.227984    9534 oci.go:88] couldn't shut down kubernetes-upgrade-022000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-022000": docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	 
	I0703 17:32:39.228069    9534 cli_runner.go:164] Run: docker rm -f -v kubernetes-upgrade-022000
	I0703 17:32:39.249229    9534 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-022000
	W0703 17:32:39.290663    9534 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-022000 returned with exit code 1
	I0703 17:32:39.290770    9534 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-022000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0703 17:32:39.310499    9534 cli_runner.go:164] Run: docker network rm kubernetes-upgrade-022000
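
The commands above are the teardown minikube performs before recreating the node: force-remove the container together with its anonymous volumes, then remove the per-profile Docker network. If a profile is ever left half-created, the same cleanup can be run by hand; the per-profile named volume, if present, has to be removed separately:

	docker rm -f -v kubernetes-upgrade-022000
	docker network rm kubernetes-upgrade-022000
	docker volume rm kubernetes-upgrade-022000
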
	I0703 17:32:39.385549    9534 fix.go:124] Sleeping 1 second for extra luck!
	I0703 17:32:40.387754    9534 start.go:125] createHost starting for "" (driver="docker")
	I0703 17:32:40.410832    9534 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0703 17:32:40.410980    9534 start.go:159] libmachine.API.Create for "kubernetes-upgrade-022000" (driver="docker")
	I0703 17:32:40.411020    9534 client.go:168] LocalClient.Create starting
	I0703 17:32:40.411270    9534 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18998-1161/.minikube/certs/ca.pem
	I0703 17:32:40.411373    9534 main.go:141] libmachine: Decoding PEM data...
	I0703 17:32:40.411408    9534 main.go:141] libmachine: Parsing certificate...
	I0703 17:32:40.411491    9534 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18998-1161/.minikube/certs/cert.pem
	I0703 17:32:40.411576    9534 main.go:141] libmachine: Decoding PEM data...
	I0703 17:32:40.411600    9534 main.go:141] libmachine: Parsing certificate...
	I0703 17:32:40.412459    9534 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-022000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0703 17:32:40.433748    9534 cli_runner.go:211] docker network inspect kubernetes-upgrade-022000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0703 17:32:40.433853    9534 network_create.go:284] running [docker network inspect kubernetes-upgrade-022000] to gather additional debugging logs...
	I0703 17:32:40.433870    9534 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-022000
	W0703 17:32:40.454273    9534 cli_runner.go:211] docker network inspect kubernetes-upgrade-022000 returned with exit code 1
	I0703 17:32:40.454303    9534 network_create.go:287] error running [docker network inspect kubernetes-upgrade-022000]: docker network inspect kubernetes-upgrade-022000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-022000 not found
	I0703 17:32:40.454318    9534 network_create.go:289] output of [docker network inspect kubernetes-upgrade-022000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-022000 not found
	
	** /stderr **
	I0703 17:32:40.454484    9534 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0703 17:32:40.475616    9534 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 17:32:40.477188    9534 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 17:32:40.478772    9534 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 17:32:40.480345    9534 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 17:32:40.481638    9534 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0703 17:32:40.481975    9534 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0013d0e90}
	I0703 17:32:40.481988    9534 network_create.go:124] attempt to create docker network kubernetes-upgrade-022000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0703 17:32:40.482056    9534 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-022000 kubernetes-upgrade-022000
	I0703 17:32:40.537815    9534 network_create.go:108] docker network kubernetes-upgrade-022000 192.168.94.0/24 created
	I0703 17:32:40.537852    9534 kic.go:121] calculated static IP "192.168.94.2" for the "kubernetes-upgrade-022000" container
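
Subnet selection walks the private 192.168.x.0/24 ranges in steps of nine (49, 58, 67, 76, 85, ...) and takes the first one that no existing Docker network has reserved; here that is 192.168.94.0/24, with .2 as the node's static address. The chosen subnet can be read back from the created network:

	docker network inspect kubernetes-upgrade-022000 --format '{{(index .IPAM.Config 0).Subnet}}'
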
	I0703 17:32:40.537969    9534 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0703 17:32:40.560784    9534 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-022000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-022000 --label created_by.minikube.sigs.k8s.io=true
	I0703 17:32:40.580078    9534 oci.go:103] Successfully created a docker volume kubernetes-upgrade-022000
	I0703 17:32:40.580203    9534 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-022000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-022000 --entrypoint /usr/bin/test -v kubernetes-upgrade-022000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 -d /var/lib
	I0703 17:32:40.827061    9534 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-022000
	I0703 17:32:40.827112    9534 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0703 17:32:40.827127    9534 kic.go:194] Starting extracting preloaded images to volume ...
	I0703 17:32:40.827250    9534 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-022000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 -I lz4 -xf /preloaded.tar -C /extractDir
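
This docker run unpacks the v1.20.0 preload tarball into the freshly created volume, using tar from the pinned kicbase image. Note the timestamps: the command is issued at 17:32:40 and the next log line appears at 17:38:40, so the extraction appears to have consumed the entire six-minute createHost budget by itself; the inspect failures that follow are fallout, not the cause. The general shape of the command, with <...> as placeholders for the values shown above, is:

	docker run --rm --entrypoint /usr/bin/tar \
	  -v <preload-tarball>:/preloaded.tar:ro \
	  -v <profile-volume>:/extractDir \
	  <kicbase-image> -I lz4 -xf /preloaded.tar -C /extractDir
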
	I0703 17:38:40.367230    9534 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 17:38:40.367358    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:38:40.389122    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	I0703 17:38:40.389237    9534 retry.go:31] will retry after 210.625916ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:38:40.601018    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:38:40.621916    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	I0703 17:38:40.622037    9534 retry.go:31] will retry after 381.675871ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:38:41.006072    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:38:41.027736    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	I0703 17:38:41.027841    9534 retry.go:31] will retry after 811.038825ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:38:41.839851    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:38:41.861358    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	W0703 17:38:41.861479    9534 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	
	W0703 17:38:41.861499    9534 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:38:41.861559    9534 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0703 17:38:41.861614    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:38:41.881665    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	I0703 17:38:41.881766    9534 retry.go:31] will retry after 324.669869ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:38:42.207805    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:38:42.229957    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	I0703 17:38:42.230058    9534 retry.go:31] will retry after 323.179812ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:38:42.553916    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:38:42.576213    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	I0703 17:38:42.576316    9534 retry.go:31] will retry after 403.971112ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:38:42.980770    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:38:43.002971    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	I0703 17:38:43.003081    9534 retry.go:31] will retry after 584.343797ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:38:43.587662    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:38:43.608198    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	W0703 17:38:43.608317    9534 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	
	W0703 17:38:43.608334    9534 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:38:43.608340    9534 start.go:128] duration metric: took 6m3.266166633s to createHost
	I0703 17:38:43.608407    9534 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 17:38:43.608465    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:38:43.627622    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	I0703 17:38:43.627717    9534 retry.go:31] will retry after 198.4187ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:38:43.827369    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:38:43.849961    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	I0703 17:38:43.850062    9534 retry.go:31] will retry after 398.800097ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:38:44.250872    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:38:44.272837    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	I0703 17:38:44.272930    9534 retry.go:31] will retry after 707.658809ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:38:44.982013    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:38:45.004811    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	W0703 17:38:45.004907    9534 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	
	W0703 17:38:45.004926    9534 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:38:45.004988    9534 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0703 17:38:45.005055    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:38:45.024396    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	I0703 17:38:45.024490    9534 retry.go:31] will retry after 128.424642ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:38:45.155278    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:38:45.176267    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	I0703 17:38:45.176366    9534 retry.go:31] will retry after 367.502283ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:38:45.545615    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:38:45.567955    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	I0703 17:38:45.568049    9534 retry.go:31] will retry after 437.700023ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:38:46.008091    9534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000
	W0703 17:38:46.030929    9534 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000 returned with exit code 1
	W0703 17:38:46.031045    9534 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	
	W0703 17:38:46.031066    9534 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-022000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	I0703 17:38:46.031075    9534 fix.go:56] duration metric: took 6m23.240894531s for fixHost
	I0703 17:38:46.031083    9534 start.go:83] releasing machines lock for "kubernetes-upgrade-022000", held for 6m23.240946549s
	W0703 17:38:46.031159    9534 out.go:239] * Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-022000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-022000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0703 17:38:46.074570    9534 out.go:177] 
	W0703 17:38:46.096435    9534 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0703 17:38:46.096459    9534 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0703 17:38:46.096503    9534 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0703 17:38:46.117567    9534 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-022000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker : exit status 52
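Exit status 52 corresponds to the DRV_CREATE_TIMEOUT failure reported in the stderr above. The recovery the tool itself suggests is the profile delete that the harness runs during cleanup below:

	out/minikube-darwin-amd64 delete -p kubernetes-upgrade-022000
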
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-022000
version_upgrade_test.go:227: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-022000: exit status 82 (9.991382514s)

-- stdout --
	* Stopping node "kubernetes-upgrade-022000"  ...
	* Stopping node "kubernetes-upgrade-022000"  ...
	* Stopping node "kubernetes-upgrade-022000"  ...
	* Stopping node "kubernetes-upgrade-022000"  ...
	* Stopping node "kubernetes-upgrade-022000"  ...
	* Stopping node "kubernetes-upgrade-022000"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect kubernetes-upgrade-022000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
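
The "--format=<no value>" in the GUEST_STOP_TIMEOUT line is likely a rendering artifact: the command minikube actually ran used --format={{.State.Status}}, and the error text appears to have been passed through a Go template on output, which substitutes "<no value>" for the unknown field. The underlying failure is the same as everywhere above, and the real command can be replayed as:

	docker container inspect kubernetes-upgrade-022000 --format '{{.State.Status}}'
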

** /stderr **
version_upgrade_test.go:229: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-022000 failed: exit status 82
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-03 17:38:56.203452 -0700 PDT m=+6772.062294571
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-022000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-022000:

-- stdout --
	[
	    {
	        "Name": "kubernetes-upgrade-022000",
	        "Id": "0ac8a63c51d75697047eecd17b3c5a31ac9bdeba0163f62ed8503d895e8347f9",
	        "Created": "2024-07-04T00:32:40.500090099Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "kubernetes-upgrade-022000"
	        }
	    }
	]

-- /stdout --
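Note that the JSON above describes a Docker network, not a container: docker inspect matches any object type by name, and what survives from the failed run is the bridge network created at 17:32:40 local time (2024-07-04T00:32:40Z), subnet 192.168.94.0/24, with an empty "Containers" map. Restricting the type confirms the container itself never existed:

	docker inspect --type container kubernetes-upgrade-022000
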
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-022000 -n kubernetes-upgrade-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-022000 -n kubernetes-upgrade-022000: exit status 7 (77.473501ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0703 17:38:56.302757   10342 status.go:131] status error: host: state: unknown state "kubernetes-upgrade-022000": docker container inspect kubernetes-upgrade-022000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-022000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-022000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-022000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-022000
--- FAIL: TestKubernetesUpgrade (762.60s)

TestMissingContainerUpgrade (7201.296s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.26.0.2059514599 start -p missing-upgrade-772000 --memory=2200 --driver=docker 
E0703 17:15:42.720918    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 17:16:00.665519    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 17:18:45.774501    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 17:20:42.723700    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 17:21:00.666905    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 17:24:03.750233    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 17:25:42.751061    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 17:26:00.696305    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.26.0.2059514599 start -p missing-upgrade-772000 --memory=2200 --driver=docker : exit status 52 (12m59.031082241s)

-- stdout --
	* [missing-upgrade-772000] minikube v1.26.0 on Darwin 14.5
	  - MINIKUBE_LOCATION=18998
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18998-1161/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	* minikube 1.33.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.33.1
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18998-1161/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node missing-upgrade-772000 in cluster missing-upgrade-772000
	* Pulling base image ...
	* Downloading Kubernetes v1.24.1 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "missing-upgrade-772000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	    > gcr.io/k8s-minikube/kicbase: 386.00 MiB / 386.00 MiB  100.00% 34.31 MiB p/s
	    [carriage-return progress meter flattened in capture: the kicbase download completed, after which a second meter restarted at 0 B and showed no progress]
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p missing-upgrade-772000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Related issue: https://github.com/kubernetes/minikube/issues/7072

                                                
                                                
** /stderr **
I0703 17:26:50.105776    1695 retry.go:31] will retry after 1.20541004s: exit status 52
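[Annotation] The retry.go:31 line above is the harness retrying the failed start with exponential backoff; the stacks later in this report show the mechanism is retry.Expo wrapping github.com/cenkalti/backoff/v4. A minimal sketch of that retry shape follows; the command and the interval/elapsed-time bounds are illustrative assumptions, not the harness's exact values.

package main

import (
	"fmt"
	"os/exec"
	"time"

	"github.com/cenkalti/backoff/v4"
)

func main() {
	start := func() error {
		// Illustrative command; the real harness re-runs a minikube binary.
		return exec.Command("minikube", "start", "-p", "missing-upgrade-772000").Run()
	}
	notify := func(err error, wait time.Duration) {
		// Produces lines like the "will retry after 1.20541004s: exit status 52" above.
		fmt.Printf("will retry after %v: %v\n", wait, err)
	}
	b := backoff.NewExponentialBackOff()
	b.InitialInterval = time.Second     // assumed bound
	b.MaxElapsedTime = 30 * time.Minute // assumed bound
	if err := backoff.RetryNotify(start, b, notify); err != nil {
		fmt.Println("giving up:", err)
	}
}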
version_upgrade_test.go:309: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.26.0.2059514599 start -p missing-upgrade-772000 --memory=2200 --driver=docker 
E0703 17:30:42.751757    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 17:31:00.697706    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 17:35:25.806724    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 17:35:42.752920    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 17:36:00.699181    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.26.0.2059514599 start -p missing-upgrade-772000 --memory=2200 --driver=docker : exit status 52 (12m53.160126275s)

                                                
                                                
-- stdout --
	* [missing-upgrade-772000] minikube v1.26.0 on Darwin 14.5
	  - MINIKUBE_LOCATION=18998
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18998-1161/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18998-1161/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-772000 in cluster missing-upgrade-772000
	* Pulling base image ...
	* docker "missing-upgrade-772000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "missing-upgrade-772000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p missing-upgrade-772000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Related issue: https://github.com/kubernetes/minikube/issues/7072

                                                
                                                
** /stderr **
I0703 17:39:44.428941    1695 retry.go:31] will retry after 759.276428ms: exit status 52
version_upgrade_test.go:309: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.26.0.2059514599 start -p missing-upgrade-772000 --memory=2200 --driver=docker 
E0703 17:40:42.705090    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 17:40:43.707881    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 17:41:00.650747    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 17:45:42.701435    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 17:46:00.647094    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
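[Annotation] The E-lines above come from client-go's cert_rotation worker, which periodically re-reads the client certificate from disk; once the addons-267000 and functional-625000 profiles were deleted, every reload fails with "no such file or directory". A rough sketch of the failing step, assuming hypothetical paths and a simplified loop (this is not client-go's actual code):

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Hypothetical profile paths; the real worker watches the files named in the E-lines.
	certFile := "/Users/jenkins/.minikube/profiles/addons-267000/client.crt"
	keyFile := "/Users/jenkins/.minikube/profiles/addons-267000/client.key"
	for range time.Tick(5 * time.Minute) {
		// Once the profile directory is deleted, this fails with ENOENT.
		if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil {
			fmt.Printf("key failed with : %v\n", err)
		}
	}
}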
panic: test timed out after 2h0m0s
running tests:
	TestMissingContainerUpgrade (32m18s)
	TestNetworkPlugins (32m23s)
	TestStoppedBinaryUpgrade (7m7s)
	TestStoppedBinaryUpgrade/Upgrade (7m6s)
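[Annotation] The panic itself is Go's standard test-timeout mechanism: go test -timeout 2h arms a timer in testing.(*M).startAlarm, and when it fires the binary panics and dumps every goroutine, which is everything that follows. A tiny reproduction, assuming a 1s timeout for illustration:

package slow_test

import (
	"testing"
	"time"
)

// Run with: go test -timeout 1s
func TestSleepsPastTimeout(t *testing.T) {
	// Exceeds the timeout, yielding "panic: test timed out after 1s"
	// followed by a full goroutine dump like the one in this report.
	time.Sleep(2 * time.Second)
}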

                                                
                                                
goroutine 2332 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 19 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0006b01a0, 0xc0009fdbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000a8a060, {0x137aa940, 0x2a, 0x2a}, {0xc0000061c0?, 0xc000880060?, 0x137cc5c0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0001a6f00)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0001a6f00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

                                                
                                                
goroutine 25 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00066e000)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 681 [IO wait, 113 minutes]:
internal/poll.runtime_pollWait(0x5aac6608, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001e51380?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc001e51380)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc001e51380)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc00169b620)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc00169b620)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc00200c870, {0x1263d530, 0xc00169b620})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc00200c870)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc0013b24e0?, 0xc0013b24e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 678
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129
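[Annotation] Goroutine 681 has sat in Accept for 113 minutes because startHTTPProxy runs an HTTP server on a background goroutine for the whole run; that is the expected "IO wait" state, not a hang. An assumed, simplified shape of such a helper (the upstream address is hypothetical, and the real helper differs):

package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
)

// startHTTPProxy sketches the helper in the stack above: the server
// goroutine parks inside Accept while the proxy is alive.
func startHTTPProxy(addr string) *http.Server {
	target, _ := url.Parse("http://127.0.0.1:8443") // hypothetical upstream
	srv := &http.Server{Addr: addr, Handler: httputil.NewSingleHostReverseProxy(target)}
	go srv.ListenAndServe() // error intentionally dropped in this sketch
	return srv
}

func main() {
	_ = startHTTPProxy("127.0.0.1:0")
	select {} // keep the process alive, mirroring the long-lived test binary
}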

                                                
                                                
goroutine 896 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001f70410, 0x2c)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x1219da00?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0017c2d20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001f70440)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00137b870, {0x12628be0, 0xc0012d9b30}, 0x1, 0xc000182240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00137b870, 0x3b9aca00, 0x0, 0x1, 0xc000182240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 911
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 1309 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0xc001b057a0)
	/usr/local/go/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1331
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 597 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc00002c9b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0006b1860)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0006b1860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertOptions(0xc0006b1860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:36 +0x92
testing.tRunner(0xc0006b1860, 0x1261def8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390
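[Annotation] This goroutine, and the many "chan receive, 33 minutes" goroutines like it below, are tests blocked inside t.Parallel(): MaybeParallel calls it, and the test goroutine waits on a channel until the runner grants a parallel slot. With the slots saturated by the stuck upgrade tests, these never run. A minimal sketch of the pattern:

package integration_test

import "testing"

// maybeParallel mirrors the MaybeParallel helper in the stacks above.
func maybeParallel(t *testing.T) {
	t.Helper()
	t.Parallel() // parks on a channel receive until a -parallel slot is free
}

func TestCertOptionsSketch(t *testing.T) {
	maybeParallel(t)
	// The body only runs once a slot is granted; under a saturated pool
	// this shows up as "chan receive" in a timeout dump, as above.
}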

                                                
                                                
goroutine 2190 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc00002c9b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013b2340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013b2340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013b2340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013b2340, 0xc0019fc400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2182
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 599 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc00002c9b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0006b1ba0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0006b1ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestDockerFlags(0xc0006b1ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:43 +0x105
testing.tRunner(0xc0006b1ba0, 0x1261df08)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2158 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc00002c9b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013e0340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013e0340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc0013e0340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc0013e0340, 0x1261e020)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1201 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc0019c0160, 0xc0017655c0)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1200
	/usr/local/go/src/os/exec/exec.go:750 +0x973
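[Annotation] The "chan send, 109 minutes" goroutines in os/exec.(*Cmd).watchCtx look like leaked command watchers: when a command started with a context is abandoned without calling Wait, the internal watcher blocks forever sending its result. That reading is an inference from the stacks, not confirmed from the test code; a sketch of how such a leak can arise:

package main

import (
	"context"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cmd := exec.CommandContext(ctx, "sleep", "600")
	_ = cmd.Start() // Start spawns the internal watchCtx goroutine
	cancel()        // the watcher kills the child, then blocks sending its result...
	// ...because Wait is never called to receive it. Pairing Start with Wait
	// reaps the child and lets the watcher exit.
	time.Sleep(time.Second)
}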

                                                
                                                
goroutine 598 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc00002c9b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0006b1a00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0006b1a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertExpiration(0xc0006b1a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc0006b1a00, 0x1261def0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2184 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc00002c9b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000bf91e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000bf91e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000bf91e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000bf91e0, 0xc0019fc100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2182
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1242 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc001b26840, 0xc001b166c0)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 788
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 1076 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc001413080, 0xc000183c20)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1075
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2187 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc00002c9b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000bf9a00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000bf9a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000bf9a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000bf9a00, 0xc0019fc280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2182
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 161 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000880f60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 148
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 162 [chan receive, 115 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a29740, 0xc000182240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 148
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2175 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc00002c9b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013e0000)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013e0000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc0013e0000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:85 +0x89
testing.tRunner(0xc0013e0000, 0x1261e000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2186 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc00002c9b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000bf9860)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000bf9860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000bf9860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000bf9860, 0xc0019fc200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2182
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 165 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000a29710, 0x2d)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x1219da00?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000880e40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a29740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008a7990, {0x12628be0, 0xc001374480}, 0x1, 0xc000182240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008a7990, 0x3b9aca00, 0x0, 0x1, 0xc000182240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 162
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 166 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x126498f8, 0xc000182240}, 0xc000515f50, 0xc000a13f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x126498f8, 0xc000182240}, 0x0?, 0xc000515f50, 0xc000515f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x126498f8?, 0xc000182240?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000515fd0?, 0xfa90864?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 162
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 167 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 166
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 600 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc00002c9b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0006b1d40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0006b1d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc0006b1d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:83 +0x92
testing.tRunner(0xc0006b1d40, 0x1261df38)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2302 [syscall, 7 minutes]:
syscall.syscall6(0xc001361f80?, 0x1000000000010?, 0x1000000004c?, 0x5b04c920?, 0x90?, 0x1400a108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc001477740?, 0xf9250a5?, 0x90?, 0x1259ef80?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xfa53765?, 0xc001477774, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc000197ce0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001412580)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc001412580)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0013b2b60, 0xc001412580)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2.1()
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:183 +0x37e
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer.Operation.withEmptyData.func1()
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:18 +0x13
github.com/cenkalti/backoff/v4.doRetryNotify[...](0xc001477c20?, {0x12634010, 0xc0018e6560}, 0x1261ef80, {0x0, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:88 +0x132
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer(0x0?, {0x12634010?, 0xc0018e6560?}, 0x40?, {0x0?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:61 +0x5c
github.com/cenkalti/backoff/v4.RetryNotify(...)
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:49
k8s.io/minikube/pkg/util/retry.Expo(0xc00096ee28, 0x3b9aca00, 0x1a3185c5000, {0xc00096ed08?, 0x1219da00?, 0xf971308?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/util/retry/retry.go:60 +0xef
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2(0xc0013b2b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:188 +0x2de
testing.tRunner(0xc0013b2b60, 0xc001a1a4c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2176
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 605 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc00002c9b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000bf9520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000bf9520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestHyperkitDriverSkipUpgrade(0xc000bf9520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/driver_install_or_update_test.go:172 +0x2a
testing.tRunner(0xc000bf9520, 0x1261df58)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 601 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc00002c9b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000bf8ea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000bf8ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc000bf8ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:146 +0x92
testing.tRunner(0xc000bf8ea0, 0x1261df30)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2107 [chan receive, 33 minutes]:
testing.(*T).Run(0xc0013e01a0, {0x111f7dc7?, 0x4b4306dfc27?}, 0xc001484240)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0013e01a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0013e01a0, 0x1261dfd8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 604 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc00002c9b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000bf9380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000bf9380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestHyperKitDriverInstallOrUpdate(0xc000bf9380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/driver_install_or_update_test.go:108 +0x39
testing.tRunner(0xc000bf9380, 0x1261df50)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 910 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0017c2e40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 801
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2185 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc00002c9b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000bf96c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000bf96c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000bf96c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000bf96c0, 0xc0019fc180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2182
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2109 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc00002c9b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013e0820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013e0820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc0013e0820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc0013e0820, 0x1261dff0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 911 [chan receive, 111 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001f70440, 0xc000182240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 801
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2189 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc00002c9b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0006b16c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0006b16c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0006b16c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0006b16c0, 0xc0019fc380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2182
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1266 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc001bb8420, 0xc001b16c60)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1265
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2340 [select, 7 minutes]:
os/exec.(*Cmd).watchCtx(0xc0014126e0, 0xc001c51020)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2194
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2191 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc00002c9b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013b2680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013b2680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013b2680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013b2680, 0xc0019fc480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2182
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 913 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x126498f8, 0xc000182240}, 0xc0009ff750, 0xc0013b8f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x126498f8, 0xc000182240}, 0x11?, 0xc0009ff750, 0xc0009ff798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x126498f8?, 0xc000182240?}, 0xc00139ed00?, 0xfa56420?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0009ff7d0?, 0xfa90864?, 0xc0007ffc40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 911
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 914 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 913
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2176 [chan receive, 7 minutes]:
testing.(*T).Run(0xc0013e0b60, {0x111fb891?, 0x3005753e800?}, 0xc001a1a4c0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc0013e0b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:160 +0x2b4
testing.tRunner(0xc0013e0b60, 0x1261e028)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2338 [IO wait, 7 minutes]:
internal/poll.runtime_pollWait(0x5aac5b60, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0008a5da0?, 0xc001417283?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0008a5da0, {0xc001417283, 0x57d, 0x57d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000bd63f0, {0xc001417283?, 0x5ad4ec08?, 0x204?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001360840, {0x12627678, 0xc0015f82c0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x126277b8, 0xc001360840}, {0x12627678, 0xc0015f82c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0009ff678?, {0x126277b8, 0xc001360840})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x137816c0?, {0x126277b8?, 0xc001360840?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x126277b8, 0xc001360840}, {0x12627738, 0xc000bd63f0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000183bc0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2194
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab
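[Annotation] Goroutine 2338 (and the similar ones at 2303, 2304 and 2339 further down) are the standard os/exec output-capture helpers: when Stdout/Stderr are not *os.File values, exec.Cmd creates a pipe per stream plus a goroutine that io.Copy's it into the supplied buffer, sitting in "IO wait" while the child runs. A self-contained sketch of the same capture:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	var stdout, stderr bytes.Buffer
	cmd := exec.Command("echo", "hello")
	// Non-*os.File writers make exec.Cmd create a pipe per stream and a
	// copy goroutine (writerDescriptor -> io.Copy -> bytes.Buffer, as above).
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String())
}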

                                                
                                                
goroutine 1308 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0xc001b057a0)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1331
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 2182 [chan receive, 33 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000bf8d00, 0xc001484240)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2107
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2188 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc00002c9b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0006b0b60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0006b0b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0006b0b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0006b0b60, 0xc0019fc300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2182
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2183 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc00002c9b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000bf9040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000bf9040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000bf9040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000bf9040, 0xc0019fc000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2182
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1800 [syscall, 97 minutes]:
syscall.syscall(0x0?, 0xc001590618?, 0xc0012fefb0?, 0xfa90b55?)
	/usr/local/go/src/runtime/sys_darwin.go:23 +0x70
syscall.Flock(0xc0013d4c30?, 0x1?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:682 +0x29
github.com/juju/mutex/v2.acquireFlock.func3()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:114 +0x34
github.com/juju/mutex/v2.acquireFlock.func4()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:121 +0x58
github.com/juju/mutex/v2.acquireFlock.func5()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:151 +0x22
created by github.com/juju/mutex/v2.acquireFlock in goroutine 1787
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:150 +0x4b1
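[Annotation] Goroutine 1800 has been stuck for 97 minutes inside syscall.Flock via github.com/juju/mutex, i.e. waiting on an advisory file lock that another process still holds. A simplified sketch of that blocking acquisition (the lock path is hypothetical; juju/mutex derives its own lock file name and adds timeouts):

package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	f, err := os.OpenFile("/tmp/minikube-sketch.lock", os.O_CREATE|os.O_RDONLY, 0o644)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	// Blocks inside the Flock syscall until the current holder releases,
	// which is the state goroutine 1800 is parked in above.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		panic(err)
	}
	fmt.Println("lock acquired")
}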

                                                
                                                
goroutine 2194 [syscall, 7 minutes]:
syscall.syscall6(0xc001361f80?, 0x1000000000010?, 0x1000000004c?, 0x5b04c920?, 0x90?, 0x1400a108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc001479758?, 0xf9250a5?, 0x90?, 0x1259ef80?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xfa53765?, 0xc00147978c, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc000197ec0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0014126e0)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0014126e0)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0013e0ea0, 0xc0014126e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestMissingContainerUpgrade.func1()
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:309 +0x66
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer.Operation.withEmptyData.func1()
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:18 +0x13
github.com/cenkalti/backoff/v4.doRetryNotify[...](0xc001479ba0?, {0x12634010, 0xc0018e7360}, 0x1261ef80, {0x0, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:88 +0x132
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer(0x0?, {0x12634010?, 0xc0018e7360?}, 0x40?, {0x0?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:61 +0x5c
github.com/cenkalti/backoff/v4.RetryNotify(...)
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:49
k8s.io/minikube/pkg/util/retry.Expo(0xc001479d10, 0x3b9aca00, 0x1a3185c5000, {0xc001479c70?, 0x1219da00?, 0xfd?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/util/retry/retry.go:60 +0xef
k8s.io/minikube/test/integration.TestMissingContainerUpgrade(0xc0013e0ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:314 +0x54e
testing.tRunner(0xc0013e0ea0, 0x1261dfb8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2303 [IO wait]:
internal/poll.runtime_pollWait(0x5aac6320, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0008a5920?, 0xc0012fab03?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0008a5920, {0xc0012fab03, 0x4fd, 0x4fd})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000bd6310, {0xc0012fab03?, 0xc000057bf0?, 0x233?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0013606f0, {0x12627678, 0xc0015f8288})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x126277b8, 0xc0013606f0}, {0x12627678, 0xc0015f8288}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x136f5180?, {0x126277b8, 0xc0013606f0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x137816c0?, {0x126277b8?, 0xc0013606f0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x126277b8, 0xc0013606f0}, {0x12627738, 0xc000bd6310}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc001a1a4c0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2302
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 2339 [IO wait, 7 minutes]:
internal/poll.runtime_pollWait(0x5aac6228, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0008a5e60?, 0xc000884c00?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0008a5e60, {0xc000884c00, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000bd6420, {0xc000884c00?, 0x5ad4ec08?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001360870, {0x12627678, 0xc0015f82c8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x126277b8, 0xc001360870}, {0x12627678, 0xc0015f82c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0009f9678?, {0x126277b8, 0xc001360870})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x137816c0?, {0x126277b8?, 0xc001360870?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x126277b8, 0xc001360870}, {0x12627738, 0xc000bd6420}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000182c60?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2194
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 2108 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc00002c9b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013e0680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013e0680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc0013e0680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc0013e0680, 0x1261dfe0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2337 [select, 7 minutes]:
os/exec.(*Cmd).watchCtx(0xc001412580, 0xc001c50ea0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2302
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2304 [IO wait]:
internal/poll.runtime_pollWait(0x5aac5d50, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0008a59e0?, 0xc00089b863?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0008a59e0, {0xc00089b863, 0x39d, 0x39d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000bd6358, {0xc00089b863?, 0xc0009fd5f0?, 0x63?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001360720, {0x12627678, 0xc0015f8290})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x126277b8, 0xc001360720}, {0x12627678, 0xc0015f8290}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0009fd678?, {0x126277b8, 0xc001360720})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x137816c0?, {0x126277b8?, 0xc001360720?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x126277b8, 0xc001360720}, {0x12627738, 0xc000bd6358}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000182c01?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2302
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                    

Test pass (168/204)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 19.59
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 0.72
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.22
12 TestDownloadOnly/v1.30.2/json-events 9.57
13 TestDownloadOnly/v1.30.2/preload-exists 0
16 TestDownloadOnly/v1.30.2/kubectl 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.29
18 TestDownloadOnly/v1.30.2/DeleteAll 0.36
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.22
20 TestDownloadOnlyKic 1.57
21 TestBinaryMirror 1.34
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.21
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
27 TestAddons/Setup 243.73
31 TestAddons/parallel/InspektorGadget 11.67
32 TestAddons/parallel/MetricsServer 6.08
33 TestAddons/parallel/HelmTiller 10.81
35 TestAddons/parallel/CSI 56.24
36 TestAddons/parallel/Headlamp 13.04
37 TestAddons/parallel/CloudSpanner 5.56
38 TestAddons/parallel/LocalPath 46.33
39 TestAddons/parallel/NvidiaDevicePlugin 5.5
40 TestAddons/parallel/Yakd 6
41 TestAddons/parallel/Volcano 36.88
44 TestAddons/serial/GCPAuth/Namespaces 0.11
45 TestAddons/StoppedEnableDisable 11.43
56 TestErrorSpam/setup 20.08
57 TestErrorSpam/start 1.81
58 TestErrorSpam/status 0.82
59 TestErrorSpam/pause 1.38
60 TestErrorSpam/unpause 1.42
61 TestErrorSpam/stop 1.92
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 37.11
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 30.48
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.19
73 TestFunctional/serial/CacheCmd/cache/add_local 1.43
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
75 TestFunctional/serial/CacheCmd/cache/list 0.08
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.51
78 TestFunctional/serial/CacheCmd/cache/delete 0.16
79 TestFunctional/serial/MinikubeKubectlCmd 1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.54
81 TestFunctional/serial/ExtraConfig 41.45
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 3
84 TestFunctional/serial/LogsFileCmd 2.86
85 TestFunctional/serial/InvalidService 4.31
87 TestFunctional/parallel/ConfigCmd 0.5
88 TestFunctional/parallel/DashboardCmd 16.34
89 TestFunctional/parallel/DryRun 1.42
90 TestFunctional/parallel/InternationalLanguage 0.59
91 TestFunctional/parallel/StatusCmd 0.85
96 TestFunctional/parallel/AddonsCmd 0.23
97 TestFunctional/parallel/PersistentVolumeClaim 27.15
99 TestFunctional/parallel/SSHCmd 0.53
100 TestFunctional/parallel/CpCmd 1.9
101 TestFunctional/parallel/MySQL 34.69
102 TestFunctional/parallel/FileSync 0.28
103 TestFunctional/parallel/CertSync 2.07
107 TestFunctional/parallel/NodeLabels 0.08
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.38
111 TestFunctional/parallel/License 0.52
112 TestFunctional/parallel/Version/short 0.11
113 TestFunctional/parallel/Version/components 0.69
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
118 TestFunctional/parallel/ImageCommands/ImageBuild 2.4
119 TestFunctional/parallel/ImageCommands/Setup 2.26
120 TestFunctional/parallel/DockerEnv/bash 1.33
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.29
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.27
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.03
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.49
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.28
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.58
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.62
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.06
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.33
131 TestFunctional/parallel/ServiceCmd/DeployApp 18.17
132 TestFunctional/parallel/ServiceCmd/List 0.31
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.31
134 TestFunctional/parallel/ServiceCmd/HTTPS 15
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.47
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.15
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
146 TestFunctional/parallel/ServiceCmd/Format 15
147 TestFunctional/parallel/ServiceCmd/URL 15
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
149 TestFunctional/parallel/MountCmd/any-port 7.98
150 TestFunctional/parallel/ProfileCmd/profile_list 0.54
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
152 TestFunctional/parallel/MountCmd/specific-port 1.95
153 TestFunctional/parallel/MountCmd/VerifyCleanup 2.33
154 TestFunctional/delete_addon-resizer_images 0.07
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 94.67
161 TestMultiControlPlane/serial/DeployApp 5.33
162 TestMultiControlPlane/serial/PingHostFromPods 1.33
163 TestMultiControlPlane/serial/AddWorkerNode 19.38
164 TestMultiControlPlane/serial/NodeLabels 0.06
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.92
166 TestMultiControlPlane/serial/CopyFile 16.35
167 TestMultiControlPlane/serial/StopSecondaryNode 11.41
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.71
169 TestMultiControlPlane/serial/RestartSecondaryNode 24.67
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.33
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 136.44
172 TestMultiControlPlane/serial/DeleteSecondaryNode 10.85
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
174 TestMultiControlPlane/serial/StopCluster 32.43
175 TestMultiControlPlane/serial/RestartCluster 107.28
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.71
177 TestMultiControlPlane/serial/AddSecondaryNode 35.86
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.89
181 TestImageBuild/serial/Setup 19.8
182 TestImageBuild/serial/NormalBuild 1.71
183 TestImageBuild/serial/BuildWithBuildArg 0.85
184 TestImageBuild/serial/BuildWithDockerIgnore 0.71
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.69
189 TestJSONOutput/start/Command 73.1
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.47
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.47
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 10.7
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.58
214 TestKicCustomNetwork/create_custom_network 20.93
215 TestKicCustomNetwork/use_default_bridge_network 20.88
216 TestKicExistingNetwork 21.01
217 TestKicCustomSubnet 21.18
218 TestKicStaticIP 21.23
219 TestMainNoArgs 0.08
220 TestMinikubeProfile 43.26
223 TestMountStart/serial/StartWithMountFirst 6.49
224 TestMountStart/serial/VerifyMountFirst 0.26
225 TestMountStart/serial/StartWithMountSecond 6.43
226 TestMountStart/serial/VerifyMountSecond 0.26
227 TestMountStart/serial/DeleteFirst 1.66
228 TestMountStart/serial/VerifyMountPostDelete 0.25
229 TestMountStart/serial/Stop 1.42
230 TestMountStart/serial/RestartStopped 7.86
240 TestMultiNode/serial/CopyFile 0.08
250 TestPreload 117.01
TestDownloadOnly/v1.20.0/json-events (19.59s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-970000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-970000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker : (19.590402095s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (19.59s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0703 15:46:23.621396    1695 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0703 15:46:23.621558    1695 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-970000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-970000: exit status 85 (294.689629ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-970000 | jenkins | v1.33.1 | 03 Jul 24 15:46 PDT |          |
	|         | -p download-only-970000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/03 15:46:04
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.22.4 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0703 15:46:04.073565    1696 out.go:291] Setting OutFile to fd 1 ...
	I0703 15:46:04.073777    1696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 15:46:04.073782    1696 out.go:304] Setting ErrFile to fd 2...
	I0703 15:46:04.073786    1696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 15:46:04.073958    1696 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18998-1161/.minikube/bin
	W0703 15:46:04.074059    1696 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18998-1161/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18998-1161/.minikube/config/config.json: no such file or directory
	I0703 15:46:04.075718    1696 out.go:298] Setting JSON to true
	I0703 15:46:04.101533    1696 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":933,"bootTime":1720045831,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0703 15:46:04.101621    1696 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0703 15:46:04.125712    1696 out.go:97] [download-only-970000] minikube v1.33.1 on Darwin 14.5
	W0703 15:46:04.125893    1696 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball: no such file or directory
	I0703 15:46:04.125930    1696 notify.go:220] Checking for updates...
	I0703 15:46:04.146281    1696 out.go:169] MINIKUBE_LOCATION=18998
	I0703 15:46:04.167688    1696 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18998-1161/kubeconfig
	I0703 15:46:04.188588    1696 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0703 15:46:04.209424    1696 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 15:46:04.230771    1696 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18998-1161/.minikube
	W0703 15:46:04.272523    1696 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0703 15:46:04.272776    1696 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 15:46:04.304794    1696 docker.go:122] docker version: linux-26.1.4:Docker Desktop 4.31.0 (153195)
	I0703 15:46:04.304940    1696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 15:46:04.393513    1696 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:58 SystemTime:2024-07-03 22:46:04.370482613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.6.31-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:33654255616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0
-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev Sc
hemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.24] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.2.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/doc
ker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.9.3]] Warnings:<nil>}}
	I0703 15:46:04.414738    1696 out.go:97] Using the docker driver based on user configuration
	I0703 15:46:04.414764    1696 start.go:297] selected driver: docker
	I0703 15:46:04.414773    1696 start.go:901] validating driver "docker" against <nil>
	I0703 15:46:04.414906    1696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 15:46:04.501399    1696 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:58 SystemTime:2024-07-03 22:46:04.478970254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.6.31-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:33654255616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0
-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev Sc
hemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.24] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.2.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/doc
ker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.9.3]] Warnings:<nil>}}
	I0703 15:46:04.501577    1696 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0703 15:46:04.505839    1696 start_flags.go:393] Using suggested 8100MB memory alloc based on sys=32768MB, container=32095MB
	I0703 15:46:04.506325    1696 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0703 15:46:04.528050    1696 out.go:169] Using Docker Desktop driver with root privileges
	I0703 15:46:04.548787    1696 cni.go:84] Creating CNI manager for ""
	I0703 15:46:04.548820    1696 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0703 15:46:04.548936    1696 start.go:340] cluster config:
	{Name:download-only-970000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:8100 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 15:46:04.569757    1696 out.go:97] Starting "download-only-970000" primary control-plane node in "download-only-970000" cluster
	I0703 15:46:04.569827    1696 cache.go:121] Beginning downloading kic base image for docker with docker
	I0703 15:46:04.592757    1696 out.go:97] Pulling base image v0.0.44-1719972989-19184 ...
	I0703 15:46:04.592822    1696 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0703 15:46:04.592884    1696 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 in local docker daemon
	I0703 15:46:04.614239    1696 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 to local cache
	I0703 15:46:04.614640    1696 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 in local cache directory
	I0703 15:46:04.614797    1696 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 to local cache
	I0703 15:46:04.646564    1696 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0703 15:46:04.646582    1696 cache.go:56] Caching tarball of preloaded images
	I0703 15:46:04.646956    1696 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0703 15:46:04.667825    1696 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0703 15:46:04.667843    1696 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0703 15:46:04.748653    1696 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0703 15:46:13.924316    1696 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 as a tarball
	
	
	* The control-plane node download-only-970000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-970000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)
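
Note: the download.go:107 line in the log above fetches the preload tarball with an md5 digest embedded in the URL query ("?checksum=md5:...") and verifies it after download. What follows is a minimal illustrative sketch of that hash-while-downloading pattern in Go; the helper name and destination path are hypothetical, and this is a simplification, not minikube's actual implementation.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"os"
	"strings"
)

// downloadWithMD5 fetches rawURL (which carries "?checksum=md5:<hex>") into dst
// and fails if the md5 of the downloaded bytes does not match that digest.
func downloadWithMD5(rawURL, dst string) error {
	u, err := url.Parse(rawURL)
	if err != nil {
		return err
	}
	// The expected digest rides along in the query string, e.g. "md5:9a82...".
	q := u.Query().Get("checksum")
	if !strings.HasPrefix(q, "md5:") {
		return fmt.Errorf("no md5 checksum in URL")
	}
	want := strings.TrimPrefix(q, "md5:")

	resp, err := http.Get(rawURL)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	// Hash while writing so the payload is only streamed once.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// URL taken verbatim from the download.go:107 log line above.
	err := downloadWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3",
		"preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4",
	)
	fmt.Println("download:", err)
}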

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.72s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.72s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-970000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

                                                
                                    
TestDownloadOnly/v1.30.2/json-events (9.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-510000 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-510000 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=docker : (9.565988424s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (9.57s)

                                                
                                    
TestDownloadOnly/v1.30.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/preload-exists
I0703 15:46:34.419492    1695 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0703 15:46:34.419535    1695 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/kubectl
--- PASS: TestDownloadOnly/v1.30.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-510000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-510000: exit status 85 (291.533923ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-970000 | jenkins | v1.33.1 | 03 Jul 24 15:46 PDT |                     |
	|         | -p download-only-970000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 03 Jul 24 15:46 PDT | 03 Jul 24 15:46 PDT |
	| delete  | -p download-only-970000        | download-only-970000 | jenkins | v1.33.1 | 03 Jul 24 15:46 PDT | 03 Jul 24 15:46 PDT |
	| start   | -o=json --download-only        | download-only-510000 | jenkins | v1.33.1 | 03 Jul 24 15:46 PDT |                     |
	|         | -p download-only-510000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/03 15:46:24
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.22.4 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0703 15:46:24.903736    1759 out.go:291] Setting OutFile to fd 1 ...
	I0703 15:46:24.904308    1759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 15:46:24.904336    1759 out.go:304] Setting ErrFile to fd 2...
	I0703 15:46:24.904347    1759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 15:46:24.904816    1759 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18998-1161/.minikube/bin
	I0703 15:46:24.906324    1759 out.go:298] Setting JSON to true
	I0703 15:46:24.931717    1759 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":953,"bootTime":1720045831,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0703 15:46:24.931808    1759 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0703 15:46:24.953014    1759 out.go:97] [download-only-510000] minikube v1.33.1 on Darwin 14.5
	I0703 15:46:24.953204    1759 notify.go:220] Checking for updates...
	I0703 15:46:24.974678    1759 out.go:169] MINIKUBE_LOCATION=18998
	I0703 15:46:24.995737    1759 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18998-1161/kubeconfig
	I0703 15:46:25.016808    1759 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0703 15:46:25.038853    1759 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 15:46:25.059880    1759 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18998-1161/.minikube
	W0703 15:46:25.101685    1759 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0703 15:46:25.102099    1759 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 15:46:25.131546    1759 docker.go:122] docker version: linux-26.1.4:Docker Desktop 4.31.0 (153195)
	I0703 15:46:25.131687    1759 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 15:46:25.215663    1759 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:58 SystemTime:2024-07-03 22:46:25.20242265 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.6.31-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:htt
ps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:33654255616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-
g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev Sch
emaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.24] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.2.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/dock
er/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.9.3]] Warnings:<nil>}}
	I0703 15:46:25.236792    1759 out.go:97] Using the docker driver based on user configuration
	I0703 15:46:25.236815    1759 start.go:297] selected driver: docker
	I0703 15:46:25.236826    1759 start.go:901] validating driver "docker" against <nil>
	I0703 15:46:25.236971    1759 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 15:46:25.321208    1759 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:58 SystemTime:2024-07-03 22:46:25.308043021 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.6.31-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:33654255616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0
-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev Sc
hemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.24] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.2.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/doc
ker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.9.3]] Warnings:<nil>}}
	I0703 15:46:25.321402    1759 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0703 15:46:25.324553    1759 start_flags.go:393] Using suggested 8100MB memory alloc based on sys=32768MB, container=32095MB
	I0703 15:46:25.324703    1759 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0703 15:46:25.347035    1759 out.go:169] Using Docker Desktop driver with root privileges
	I0703 15:46:25.367875    1759 cni.go:84] Creating CNI manager for ""
	I0703 15:46:25.367905    1759 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0703 15:46:25.367921    1759 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0703 15:46:25.368040    1759 start.go:340] cluster config:
	{Name:download-only-510000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:8100 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-510000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 15:46:25.388774    1759 out.go:97] Starting "download-only-510000" primary control-plane node in "download-only-510000" cluster
	I0703 15:46:25.388803    1759 cache.go:121] Beginning downloading kic base image for docker with docker
	I0703 15:46:25.410785    1759 out.go:97] Pulling base image v0.0.44-1719972989-19184 ...
	I0703 15:46:25.410823    1759 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0703 15:46:25.410893    1759 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 in local docker daemon
	I0703 15:46:25.431355    1759 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 to local cache
	I0703 15:46:25.431549    1759 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 in local cache directory
	I0703 15:46:25.431565    1759 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 in local cache directory, skipping pull
	I0703 15:46:25.431571    1759 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 exists in cache, skipping pull
	I0703 15:46:25.431579    1759 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 as a tarball
	I0703 15:46:25.462740    1759 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0703 15:46:25.462761    1759 cache.go:56] Caching tarball of preloaded images
	I0703 15:46:25.463051    1759 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0703 15:46:25.483895    1759 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0703 15:46:25.483946    1759 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 ...
	I0703 15:46:25.565421    1759 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4?checksum=md5:f94875995e68df9a8856f3277eea0126 -> /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0703 15:46:29.858023    1759 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 ...
	I0703 15:46:29.858502    1759 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18998-1161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-510000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-510000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.29s)

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAll (0.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.36s)

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-510000
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.22s)

                                                
                                    
TestDownloadOnlyKic (1.57s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-803000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-803000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-803000
--- PASS: TestDownloadOnlyKic (1.57s)

                                                
                                    
TestBinaryMirror (1.34s)

                                                
                                                
=== RUN   TestBinaryMirror
I0703 15:46:37.290742    1695 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/darwin/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-831000 --alsologtostderr --binary-mirror http://127.0.0.1:49345 --driver=docker 
helpers_test.go:175: Cleaning up "binary-mirror-831000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-831000
--- PASS: TestBinaryMirror (1.34s)
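
Note: the binary.go:76 line above shows the "checksum=file:" variant, where the expected sha256 is itself fetched from a sidecar .sha256 URL, and the Run line exercises --binary-mirror against a local HTTP endpoint. Below is a rough sketch of such a mirror, under the assumption that it only needs to serve files laid out like dl.k8s.io's release tree; the port and directory are placeholders, not the test's actual fixture.

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve ./mirror, assumed to replicate dl.k8s.io's release tree,
	// e.g. ./mirror/v1.30.2/bin/darwin/amd64/kubectl plus its .sha256 sidecar.
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Println("binary mirror listening on 127.0.0.1:49345")
	log.Fatal(http.ListenAndServe("127.0.0.1:49345", nil))
}

With that listening, a start like the Run line above (--binary-mirror http://127.0.0.1:49345) should resolve kubectl from the local server instead of dl.k8s.io.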

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-267000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-267000: exit status 85 (206.355157ms)

                                                
                                                
-- stdout --
	* Profile "addons-267000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-267000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-267000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-267000: exit status 85 (185.982689ms)

                                                
                                                
-- stdout --
	* Profile "addons-267000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-267000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

                                                
                                    
TestAddons/Setup (243.73s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-267000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-267000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (4m3.732176654s)
--- PASS: TestAddons/Setup (243.73s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.67s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rsr8w" [b17013be-f122-446e-82a8-16e4790fbbed] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003240833s
addons_test.go:843: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-267000
addons_test.go:843: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-267000: (5.668590863s)
--- PASS: TestAddons/parallel/InspektorGadget (11.67s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.08s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.508066ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-57hc5" [28ff338e-69f7-4d3c-8114-dd0fd166edd0] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004219043s
addons_test.go:417: (dbg) Run:  kubectl --context addons-267000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-267000 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:434: (dbg) Done: out/minikube-darwin-amd64 -p addons-267000 addons disable metrics-server --alsologtostderr -v=1: (1.008766905s)
--- PASS: TestAddons/parallel/MetricsServer (6.08s)
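
Note: the "waiting 6m0s for pods matching ..." steps above (helpers_test.go:344) poll the cluster until a pod carrying the given label is up. A rough client-go equivalent of that wait, for illustration only (the suite's real helper also tracks readiness and other conditions):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls every 2s until at least one pod matching selector
// in namespace ns reports phase Running, or the timeout elapses.
func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, err
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil // not there yet; keep polling
		})
}

func main() {
	// Load the default kubeconfig (~/.kube/config) for the current context.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Selector, namespace, and timeout taken from the log lines above.
	fmt.Println(waitForPods(cs, "kube-system", "k8s-app=metrics-server", 6*time.Minute))
}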

                                                
                                    
TestAddons/parallel/HelmTiller (10.81s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 3.051016ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-mqggk" [0aed5ac7-5f1d-4840-a9df-c4a748166409] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004063431s
addons_test.go:475: (dbg) Run:  kubectl --context addons-267000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-267000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.201153676s)
addons_test.go:480: kubectl --context addons-267000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-267000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.81s)
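
The "Unable to use a TTY" stderr above is expected: the test passes `-it` while kubectl's stdin is a pipe, so kubectl falls back to streaming logs, and the test treats that warning as benign. A sketch of the same one-shot probe without requesting a TTY (keeping `-i` so `--rm` still applies):

    # Same probe as logged, minus -t, so no TTY is requested and no warning is emitted.
    kubectl --context addons-267000 run --rm -i helm-test --restart=Never \
      --image=docker.io/alpine/helm:2.16.3 --namespace=kube-system -- version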

TestAddons/parallel/CSI (56.24s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0703 15:50:55.612821    1695 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0703 15:50:55.616627    1695 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0703 15:50:55.616640    1695 kapi.go:107] duration metric: took 3.825017ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:563: csi-hostpath-driver pods stabilized in 3.831005ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-267000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-267000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [cc384593-b0db-46e0-92b5-2aac8968ff5e] Pending
helpers_test.go:344: "task-pv-pod" [cc384593-b0db-46e0-92b5-2aac8968ff5e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [cc384593-b0db-46e0-92b5-2aac8968ff5e] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003393435s
addons_test.go:586: (dbg) Run:  kubectl --context addons-267000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-267000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-267000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-267000 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-267000 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-267000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-267000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [2da69da1-9c54-419b-96a9-3f9100c1393f] Pending
helpers_test.go:344: "task-pv-pod-restore" [2da69da1-9c54-419b-96a9-3f9100c1393f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [2da69da1-9c54-419b-96a9-3f9100c1393f] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003248481s
addons_test.go:628: (dbg) Run:  kubectl --context addons-267000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-267000 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-267000 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-darwin-amd64 -p addons-267000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-darwin-amd64 -p addons-267000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.537486004s)
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-267000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (56.24s)
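
The block above walks the full hostpath-CSI lifecycle: claim, writer pod, snapshot, delete, restore from snapshot. Condensed from the commands in the log (manifest paths are relative to the minikube test tree):

    # The snapshot/restore sequence the test drives, in order.
    kubectl --context addons-267000 create -f testdata/csi-hostpath-driver/pvc.yaml           # claim "hpvc"
    kubectl --context addons-267000 create -f testdata/csi-hostpath-driver/pv-pod.yaml        # pod "task-pv-pod"
    kubectl --context addons-267000 create -f testdata/csi-hostpath-driver/snapshot.yaml      # "new-snapshot-demo"
    kubectl --context addons-267000 delete pod task-pv-pod
    kubectl --context addons-267000 delete pvc hpvc
    kubectl --context addons-267000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml   # claim from snapshot
    kubectl --context addons-267000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml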

TestAddons/parallel/Headlamp (13.04s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-267000 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-267000 --alsologtostderr -v=1: (1.035253755s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-xbdcq" [6d6ee401-2fcc-45bf-b49a-ac5779309c17] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-xbdcq" [6d6ee401-2fcc-45bf-b49a-ac5779309c17] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.00473524s
--- PASS: TestAddons/parallel/Headlamp (13.04s)

TestAddons/parallel/CloudSpanner (5.56s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-wfkxn" [9339ed24-d79c-4c82-82a5-c1f8e2a85c4f] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002893534s
addons_test.go:862: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-267000
--- PASS: TestAddons/parallel/CloudSpanner (5.56s)

TestAddons/parallel/LocalPath (46.33s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-267000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-267000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-267000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [825b4512-5e6f-46b9-bcc7-8bf52fa800c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [825b4512-5e6f-46b9-bcc7-8bf52fa800c4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [825b4512-5e6f-46b9-bcc7-8bf52fa800c4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003560513s
addons_test.go:992: (dbg) Run:  kubectl --context addons-267000 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-darwin-amd64 -p addons-267000 ssh "cat /opt/local-path-provisioner/pvc-44538ceb-32d5-44f7-b9b1-7cf50af63814_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-267000 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-267000 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-darwin-amd64 -p addons-267000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-darwin-amd64 -p addons-267000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (36.554231943s)
--- PASS: TestAddons/parallel/LocalPath (46.33s)
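
The sequence above binds a PVC through the local-path provisioner, lets a short-lived pod write file1, then reads it back from the node over ssh. Note that the provisioned directory embeds the PVC's UID, so the path in the sketch below is specific to this run:

    # Write via the pod, read back from the node (the PVC UID in the path is run-specific).
    kubectl --context addons-267000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-267000 apply -f testdata/storage-provisioner-rancher/pod.yaml
    out/minikube-darwin-amd64 -p addons-267000 ssh \
      "cat /opt/local-path-provisioner/pvc-44538ceb-32d5-44f7-b9b1-7cf50af63814_default_test-pvc/file1"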

TestAddons/parallel/NvidiaDevicePlugin (5.5s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-zkzzk" [eb43c744-4864-4762-a557-a8d8f6115b95] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004938706s
addons_test.go:1056: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-267000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.50s)

TestAddons/parallel/Yakd (6s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-688n9" [975034f9-e44e-4f7c-a92b-99e451caa005] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003655494s
--- PASS: TestAddons/parallel/Yakd (6.00s)

TestAddons/parallel/Volcano (36.88s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:897: volcano-admission stabilized in 2.984315ms
addons_test.go:905: volcano-controller stabilized in 3.328049ms
addons_test.go:889: volcano-scheduler stabilized in 3.88256ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-jxlq4" [57ccb992-f216-4815-baac-ac78d21602f6] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 6.003928232s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-x4xq6" [7ef568ac-c2ee-4034-91d4-9e147098da23] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.004640642s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-n97ln" [2d830db4-6c81-474d-8b8c-0eb97c769432] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.004156793s
addons_test.go:924: (dbg) Run:  kubectl --context addons-267000 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-267000 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-267000 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [d066c180-8778-492a-add2-ec75f3c0bb8b] Pending
helpers_test.go:344: "test-job-nginx-0" [d066c180-8778-492a-add2-ec75f3c0bb8b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [d066c180-8778-492a-add2-ec75f3c0bb8b] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 10.002945663s
addons_test.go:960: (dbg) Run:  out/minikube-darwin-amd64 -p addons-267000 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-darwin-amd64 -p addons-267000 addons disable volcano --alsologtostderr -v=1: (10.621832072s)
--- PASS: TestAddons/parallel/Volcano (36.88s)
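
Once the scheduler, admission, and controller pods report healthy, the test submits a Volcano job and waits for its pod; deleting the volcano-admission-init job first clears the addon's one-shot init workload, as logged above. The same steps as plain commands:

    # Submit a vcjob after the three Volcano components are Running.
    kubectl --context addons-267000 delete -n volcano-system job volcano-admission-init
    kubectl --context addons-267000 create -f testdata/vcjob.yaml    # creates test-job in namespace my-volcano
    kubectl --context addons-267000 get vcjob -n my-volcano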

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-267000 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-267000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (11.43s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-267000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-267000: (10.869052213s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-267000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-267000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-267000
--- PASS: TestAddons/StoppedEnableDisable (11.43s)

TestErrorSpam/setup (20.08s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-837000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-837000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-837000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-837000 --driver=docker : (20.079604856s)
--- PASS: TestErrorSpam/setup (20.08s)

TestErrorSpam/start (1.81s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-837000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-837000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-837000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-837000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-837000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-837000 start --dry-run
--- PASS: TestErrorSpam/start (1.81s)

TestErrorSpam/status (0.82s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-837000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-837000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-837000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-837000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-837000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-837000 status
--- PASS: TestErrorSpam/status (0.82s)

TestErrorSpam/pause (1.38s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-837000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-837000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-837000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-837000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-837000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-837000 pause
--- PASS: TestErrorSpam/pause (1.38s)

TestErrorSpam/unpause (1.42s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-837000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-837000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-837000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-837000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-837000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-837000 unpause
--- PASS: TestErrorSpam/unpause (1.42s)

TestErrorSpam/stop (1.92s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-837000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-837000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-837000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-837000 stop: (1.423625015s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-837000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-837000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-837000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-837000 stop
--- PASS: TestErrorSpam/stop (1.92s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18998-1161/.minikube/files/etc/test/nested/copy/1695/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (37.11s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-625000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-625000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (37.110664475s)
--- PASS: TestFunctional/serial/StartWithProxy (37.11s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (30.48s)

=== RUN   TestFunctional/serial/SoftStart
I0703 15:54:23.198558    1695 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-625000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-625000 --alsologtostderr -v=8: (30.478547165s)
functional_test.go:659: soft start took 30.478968526s for "functional-625000" cluster.
I0703 15:54:53.677618    1695 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
--- PASS: TestFunctional/serial/SoftStart (30.48s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-625000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 cache add registry.k8s.io/pause:3.1: (1.134815407s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 cache add registry.k8s.io/pause:3.3: (1.059712857s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.19s)

TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-625000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3957740980/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 cache add minikube-local-cache-test:functional-625000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 cache delete minikube-local-cache-test:functional-625000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-625000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.51s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-625000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (257.938461ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.51s)
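
The reload cycle above: delete the image inside the node, confirm `crictl inspecti` now exits 1 (the FATA line), then let `cache reload` push every cached image back into the node. Reproduced as plain commands:

    # Break, verify broken, repair, verify repaired.
    out/minikube-darwin-amd64 -p functional-625000 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-darwin-amd64 -p functional-625000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1
    out/minikube-darwin-amd64 -p functional-625000 cache reload
    out/minikube-darwin-amd64 -p functional-625000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # now succeeds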

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 kubectl -- --context functional-625000 get pods
functional_test.go:712: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 kubectl -- --context functional-625000 get pods: (1.000539944s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.00s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.54s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-625000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-625000 get pods: (1.535749395s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.54s)

TestFunctional/serial/ExtraConfig (41.45s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-625000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0703 15:55:42.580028    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 15:55:42.585601    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 15:55:42.596441    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 15:55:42.618526    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 15:55:42.658820    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 15:55:42.739691    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 15:55:42.899950    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 15:55:43.221525    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 15:55:43.861999    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-625000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.450429069s)
functional_test.go:757: restart took 41.450536289s for "functional-625000" cluster.
I0703 15:55:44.512923    1695 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
--- PASS: TestFunctional/serial/ExtraConfig (41.45s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-625000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 logs
E0703 15:55:45.142192    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 logs: (2.996106892s)
--- PASS: TestFunctional/serial/LogsCmd (3.00s)

TestFunctional/serial/LogsFileCmd (2.86s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd4268580775/001/logs.txt
E0703 15:55:47.702537    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd4268580775/001/logs.txt: (2.85400015s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.86s)

TestFunctional/serial/InvalidService (4.31s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-625000 apply -f testdata/invalidsvc.yaml
E0703 15:55:52.822734    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-625000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-625000: exit status 115 (395.289148ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30459 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-625000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.31s)
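
Exit status 115 with SVC_UNREACHABLE is the expected result here: per the stderr above, invalid-svc has no running backing pod, so `minikube service` prints the NodePort URL table but refuses to open it. As plain commands:

    # Reproduce the SVC_UNREACHABLE failure (exit 115), then clean up.
    kubectl --context functional-625000 apply -f testdata/invalidsvc.yaml
    out/minikube-darwin-amd64 service invalid-svc -p functional-625000    # exit status 115
    kubectl --context functional-625000 delete -f testdata/invalidsvc.yaml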

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-625000 config get cpus: exit status 14 (62.211493ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-625000 config get cpus: exit status 14 (59.319516ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
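
Exit status 14 is what `config get` returns for a key absent from the config (the meaning is inferred from the "specified key could not be found" stderr above), which is why the test alternates unset/get/set/get. The cycle as plain commands:

    # `config get` on a missing key exits 14 with "specified key could not be found in config".
    out/minikube-darwin-amd64 -p functional-625000 config unset cpus
    out/minikube-darwin-amd64 -p functional-625000 config get cpus    # exit 14
    out/minikube-darwin-amd64 -p functional-625000 config set cpus 2
    out/minikube-darwin-amd64 -p functional-625000 config get cpus    # prints 2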

TestFunctional/parallel/DashboardCmd (16.34s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-625000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-625000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3409: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.34s)

TestFunctional/parallel/DryRun (1.42s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-625000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-625000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (744.88823ms)

-- stdout --
	* [functional-625000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=18998
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18998-1161/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18998-1161/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0703 15:57:22.125895    3333 out.go:291] Setting OutFile to fd 1 ...
	I0703 15:57:22.126205    3333 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 15:57:22.126211    3333 out.go:304] Setting ErrFile to fd 2...
	I0703 15:57:22.126215    3333 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 15:57:22.126412    3333 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18998-1161/.minikube/bin
	I0703 15:57:22.128342    3333 out.go:298] Setting JSON to false
	I0703 15:57:22.152182    3333 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1611,"bootTime":1720045831,"procs":425,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0703 15:57:22.152269    3333 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0703 15:57:22.194504    3333 out.go:177] * [functional-625000] minikube v1.33.1 on Darwin 14.5
	I0703 15:57:22.215482    3333 notify.go:220] Checking for updates...
	I0703 15:57:22.237334    3333 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 15:57:22.279427    3333 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18998-1161/kubeconfig
	I0703 15:57:22.321338    3333 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0703 15:57:22.363388    3333 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 15:57:22.426293    3333 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18998-1161/.minikube
	I0703 15:57:22.468408    3333 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 15:57:22.489840    3333 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0703 15:57:22.490374    3333 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 15:57:22.519432    3333 docker.go:122] docker version: linux-26.1.4:Docker Desktop 4.31.0 (153195)
	I0703 15:57:22.519630    3333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 15:57:22.610774    3333 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:71 SystemTime:2024-07-03 22:57:22.602353345 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.6.31-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:33654255616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0
-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev Sc
hemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.24] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.2.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/doc
ker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.9.3]] Warnings:<nil>}}
	I0703 15:57:22.633583    3333 out.go:177] * Using the docker driver based on existing profile
	I0703 15:57:22.675524    3333 start.go:297] selected driver: docker
	I0703 15:57:22.675539    3333 start.go:901] validating driver "docker" against &{Name:functional-625000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-625000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 15:57:22.675613    3333 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 15:57:22.719350    3333 out.go:177] 
	W0703 15:57:22.740559    3333 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0703 15:57:22.761381    3333 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-625000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.42s)

TestFunctional/parallel/InternationalLanguage (0.59s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-625000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-625000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (593.718924ms)

-- stdout --
	* [functional-625000] minikube v1.33.1 sur Darwin 14.5
	  - MINIKUBE_LOCATION=18998
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18998-1161/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18998-1161/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0703 15:57:23.528152    3383 out.go:291] Setting OutFile to fd 1 ...
	I0703 15:57:23.528317    3383 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 15:57:23.528322    3383 out.go:304] Setting ErrFile to fd 2...
	I0703 15:57:23.528326    3383 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 15:57:23.528516    3383 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18998-1161/.minikube/bin
	I0703 15:57:23.530034    3383 out.go:298] Setting JSON to false
	I0703 15:57:23.553587    3383 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1612,"bootTime":1720045831,"procs":424,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0703 15:57:23.553690    3383 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0703 15:57:23.575484    3383 out.go:177] * [functional-625000] minikube v1.33.1 sur Darwin 14.5
	I0703 15:57:23.617498    3383 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 15:57:23.617529    3383 notify.go:220] Checking for updates...
	I0703 15:57:23.661445    3383 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18998-1161/kubeconfig
	I0703 15:57:23.703425    3383 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0703 15:57:23.724521    3383 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 15:57:23.745546    3383 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18998-1161/.minikube
	I0703 15:57:23.766342    3383 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 15:57:23.790131    3383 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0703 15:57:23.790910    3383 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 15:57:23.817600    3383 docker.go:122] docker version: linux-26.1.4:Docker Desktop 4.31.0 (153195)
	I0703 15:57:23.817769    3383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 15:57:23.902908    3383 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:71 SystemTime:2024-07-03 22:57:23.893817184 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.6.31-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:33654255616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0
-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev Sc
hemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.24] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.2.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/doc
ker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.9.3]] Warnings:<nil>}}
	I0703 15:57:23.924494    3383 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0703 15:57:23.966364    3383 start.go:297] selected driver: docker
	I0703 15:57:23.966384    3383 start.go:901] validating driver "docker" against &{Name:functional-625000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-625000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 15:57:23.966477    3383 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 15:57:23.990546    3383 out.go:177] 
	W0703 15:57:24.011370    3383 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0703 15:57:24.032390    3383 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.59s)
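
Editor's note: the English (DryRun) and French (InternationalLanguage) runs above exercise the same preflight check: the requested memory (250MB) is compared against a usable minimum (1800MB here) before any driver work starts. A minimal Go sketch of that kind of threshold check, for illustration only; the names below are assumptions, not minikube's actual code:

package main

import "fmt"

// minUsableMemoryMB mirrors the floor reported in the log above.
const minUsableMemoryMB = 1800

// validateRequestedMemory is a hypothetical stand-in for minikube's check.
func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMB is less than the usable minimum of %dMB", requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateRequestedMemory(250); err != nil {
		fmt.Println("X Exiting due to", err)
	}
}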

TestFunctional/parallel/StatusCmd (0.85s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.85s)
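
Editor's note: the format string passed to `status -f` is a Go text/template rendered against a status object. Note that "kublet:" in the command above is only literal text between the template actions (the misspelling is in the test's format string and is harmless; only the {{.Kubelet}} action is evaluated). A self-contained sketch, using a simplified Status type whose field names are taken from the template actions in the command:

package main

import (
	"os"
	"text/template"
)

// Status is a simplified stand-in for the struct minikube renders.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
	// prints: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
}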

TestFunctional/parallel/AddonsCmd (0.23s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

TestFunctional/parallel/PersistentVolumeClaim (27.15s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [49e86be6-eb16-4e10-8fdc-8edad59dd7f3] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005542562s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-625000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-625000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-625000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-625000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f97f3b9f-30be-4267-92b8-fcbdf4175a0a] Pending
helpers_test.go:344: "sp-pod" [f97f3b9f-30be-4267-92b8-fcbdf4175a0a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f97f3b9f-30be-4267-92b8-fcbdf4175a0a] Running
E0703 15:57:04.504342    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004277737s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-625000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-625000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-625000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d4499528-4e79-4e33-8d36-5f20b242e22a] Pending
helpers_test.go:344: "sp-pod" [d4499528-4e79-4e33-8d36-5f20b242e22a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d4499528-4e79-4e33-8d36-5f20b242e22a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003305614s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-625000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.15s)
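
Editor's note: the "waiting 4m0s/3m0s for pods matching ..." lines come from a poll-until-healthy helper; the test then proves the claim survives a pod restart (touch a file, delete the pod, re-apply, ls the file). A generic sketch of the polling pattern, with a toy check function standing in for a real pod-phase lookup:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls check at a fixed interval until it reports true or the
// timeout elapses. This is an illustrative sketch, not minikube's helper.
func waitFor(timeout, interval time.Duration, check func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for condition")
}

func main() {
	start := time.Now()
	err := waitFor(3*time.Minute, 100*time.Millisecond, func() (bool, error) {
		return time.Since(start) > 300*time.Millisecond, nil // toy condition
	})
	fmt.Println("healthy:", err == nil)
}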

TestFunctional/parallel/SSHCmd (0.53s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.53s)

TestFunctional/parallel/CpCmd (1.9s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh -n functional-625000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 cp functional-625000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelCpCmd4039389880/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh -n functional-625000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh -n functional-625000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.90s)

TestFunctional/parallel/MySQL (34.69s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-625000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-xlrmd" [53adc66d-e9e9-43da-acfe-a239cc83342d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-xlrmd" [53adc66d-e9e9-43da-acfe-a239cc83342d] Running
E0703 15:56:23.543877    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 28.004209001s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-625000 exec mysql-64454c8b5c-xlrmd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-625000 exec mysql-64454c8b5c-xlrmd -- mysql -ppassword -e "show databases;": exit status 1 (178.373848ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0703 15:56:28.707053    1695 retry.go:31] will retry after 713.809365ms: exit status 1
functional_test.go:1803: (dbg) Run:  kubectl --context functional-625000 exec mysql-64454c8b5c-xlrmd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-625000 exec mysql-64454c8b5c-xlrmd -- mysql -ppassword -e "show databases;": exit status 1 (119.07308ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0703 15:56:29.540931    1695 retry.go:31] will retry after 2.067799848s: exit status 1
functional_test.go:1803: (dbg) Run:  kubectl --context functional-625000 exec mysql-64454c8b5c-xlrmd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-625000 exec mysql-64454c8b5c-xlrmd -- mysql -ppassword -e "show databases;": exit status 1 (110.753314ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0703 15:56:31.721989    1695 retry.go:31] will retry after 3.229329479s: exit status 1
functional_test.go:1803: (dbg) Run:  kubectl --context functional-625000 exec mysql-64454c8b5c-xlrmd -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (34.69s)
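
Editor's note: the early access-denied and socket errors are expected while mysqld initializes; the "will retry after ..." lines come from a retry helper (retry.go) that backs off between attempts until the query succeeds. A minimal sketch of that pattern; the roughly-growing, jittered delays in the log suggest exponential backoff, but the exact policy below is an assumption:

package main

import (
	"fmt"
	"time"
)

// retry runs fn up to maxAttempts times, doubling the delay between tries.
func retry(maxAttempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		if attempt < maxAttempts {
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2 // assumed doubling; real helpers usually add jitter
		}
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("exit status 1")
		}
		return nil
	})
	fmt.Println("final:", err)
}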

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1695/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "sudo cat /etc/test/nested/copy/1695/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)
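
Editor's note: FileSync verifies that a file staged on the host shows up inside the node; minikube syncs files placed under $MINIKUBE_HOME/files into the machine at the corresponding absolute path, which is why the staged copy surfaces as /etc/test/nested/copy/1695/hosts. A sketch of that path mapping (the helper below is hypothetical, written only to show the convention):

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// destPath maps a host file under $MINIKUBE_HOME/files to its in-machine path.
func destPath(minikubeHome, hostFile string) (string, bool) {
	root := filepath.Join(minikubeHome, "files")
	rel, err := filepath.Rel(root, hostFile)
	if err != nil || strings.HasPrefix(rel, "..") {
		return "", false // not under the sync root
	}
	return "/" + filepath.ToSlash(rel), true
}

func main() {
	dst, ok := destPath("/Users/jenkins/.minikube",
		"/Users/jenkins/.minikube/files/etc/test/nested/copy/1695/hosts")
	fmt.Println(dst, ok) // /etc/test/nested/copy/1695/hosts true
}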

TestFunctional/parallel/CertSync (2.07s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1695.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "sudo cat /etc/ssl/certs/1695.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1695.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "sudo cat /usr/share/ca-certificates/1695.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/16952.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "sudo cat /etc/ssl/certs/16952.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/16952.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "sudo cat /usr/share/ca-certificates/16952.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.07s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-625000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
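
Editor's note: the `--template` string above is a go-template that ranges over the first node's label map and emits each key. A self-contained sketch of what `{{range $k, $v := ...}}{{$k}} {{end}}` does, using a plain map in place of the node object:

package main

import (
	"os"
	"text/template"
)

func main() {
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-625000",
		"kubernetes.io/os":       "linux",
	}
	t := template.Must(template.New("labels").Parse(
		`'{{range $k, $v := .}}{{$k}} {{end}}'`))
	if err := t.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
	// prints each label key followed by a space, wrapped in single quotes
}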

TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-625000 ssh "sudo systemctl is-active crio": exit status 1 (384.007942ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)
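
Editor's note: `systemctl is-active` prints "inactive" and exits non-zero (status 3) when the unit is stopped, so here the failing command is the expected result: crio must be disabled when docker is the active runtime. A sketch of reading both stdout and the exit code from Go, the way a harness can distinguish "unit stopped" from "command unavailable":

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("systemctl", "is-active", "crio").Output()
	var exitErr *exec.ExitError
	switch {
	case errors.As(err, &exitErr):
		// Non-zero exit: for is-active this typically means not running.
		fmt.Printf("state=%q exit=%d\n", out, exitErr.ExitCode())
	case err != nil:
		fmt.Println("could not run systemctl:", err)
	default:
		fmt.Printf("state=%q (active)\n", out)
	}
}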

TestFunctional/parallel/License (0.52s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.52s)

TestFunctional/parallel/Version/short (0.11s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

TestFunctional/parallel/Version/components (0.69s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.69s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-625000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-625000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-625000
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-625000 image ls --format short --alsologtostderr:
I0703 15:57:34.795123    3569 out.go:291] Setting OutFile to fd 1 ...
I0703 15:57:34.795664    3569 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 15:57:34.795671    3569 out.go:304] Setting ErrFile to fd 2...
I0703 15:57:34.795675    3569 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 15:57:34.795900    3569 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18998-1161/.minikube/bin
I0703 15:57:34.796557    3569 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0703 15:57:34.796671    3569 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0703 15:57:34.797126    3569 cli_runner.go:164] Run: docker container inspect functional-625000 --format={{.State.Status}}
I0703 15:57:34.832739    3569 ssh_runner.go:195] Run: systemctl --version
I0703 15:57:34.832942    3569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-625000
I0703 15:57:34.876576    3569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50104 SSHKeyPath:/Users/jenkins/minikube-integration/18998-1161/.minikube/machines/functional-625000/id_rsa Username:docker}
I0703 15:57:34.976691    3569 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-625000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.30.2           | 53c535741fb44 | 84.7MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-625000 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/minikube-local-cache-test | functional-625000 | a64c8a42351e8 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.30.2           | 7820c83aa1394 | 62MB   |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/library/nginx                     | alpine            | 099a2d701db1f | 43.2MB |
| registry.k8s.io/kube-apiserver              | v1.30.2           | 56ce0fd9fb532 | 117MB  |
| registry.k8s.io/kube-controller-manager     | v1.30.2           | e874818b3caac | 111MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/localhost/my-image                | functional-625000 | 6f08966e3d962 | 1.24MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | latest            | fffffc90d343c | 188MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-625000 image ls --format table --alsologtostderr:
I0703 15:57:37.974263    3608 out.go:291] Setting OutFile to fd 1 ...
I0703 15:57:37.974552    3608 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 15:57:37.974558    3608 out.go:304] Setting ErrFile to fd 2...
I0703 15:57:37.974562    3608 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 15:57:37.974751    3608 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18998-1161/.minikube/bin
I0703 15:57:37.975383    3608 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0703 15:57:37.975477    3608 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0703 15:57:37.975897    3608 cli_runner.go:164] Run: docker container inspect functional-625000 --format={{.State.Status}}
I0703 15:57:37.997203    3608 ssh_runner.go:195] Run: systemctl --version
I0703 15:57:37.997275    3608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-625000
I0703 15:57:38.018885    3608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50104 SSHKeyPath:/Users/jenkins/minikube-integration/18998-1161/.minikube/machines/functional-625000/id_rsa Username:docker}
I0703 15:57:38.107716    3608 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/07/03 15:57:40 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-625000 image ls --format json --alsologtostderr:
[{"id":"7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.2"],"size":"62000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-625000"],"size":"32900000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e
7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"6f08966e3d9627d02485e82c847cd16294f7e510b5fb84a4a42259543b39ca29","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-625000"],"size":"1240000"},{"id":"099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.2"],"size":"111000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"],"size":"117000000"},{"id":"53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
"repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.2"],"size":"84700000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"a64c8a42351e8028db8aa42187db6fe7c5ad09cc8bcf50a6d2f0f3d1ee0e61f5","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-625000"],"size":"30"},{"id":"fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cno
ne\u003e"],"size":"246000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-625000 image ls --format json --alsologtostderr:
I0703 15:57:37.737937    3604 out.go:291] Setting OutFile to fd 1 ...
I0703 15:57:37.738203    3604 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 15:57:37.738208    3604 out.go:304] Setting ErrFile to fd 2...
I0703 15:57:37.738212    3604 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 15:57:37.738376    3604 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18998-1161/.minikube/bin
I0703 15:57:37.739024    3604 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0703 15:57:37.739119    3604 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0703 15:57:37.739491    3604 cli_runner.go:164] Run: docker container inspect functional-625000 --format={{.State.Status}}
I0703 15:57:37.760938    3604 ssh_runner.go:195] Run: systemctl --version
I0703 15:57:37.761008    3604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-625000
I0703 15:57:37.782774    3604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50104 SSHKeyPath:/Users/jenkins/minikube-integration/18998-1161/.minikube/machines/functional-625000/id_rsa Username:docker}
I0703 15:57:37.871011    3604 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
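
Editor's note: the JSON listing above is an array of image records. A sketch that decodes one such record into a struct matching the fields shown (id, repoTags, size); the type is illustrative, not taken from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
)

// image mirrors the fields visible in the JSON output above.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	data := []byte(`[{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"}]`)
	var images []image
	if err := json.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	for _, im := range images {
		fmt.Println(im.RepoTags[0], im.Size) // registry.k8s.io/pause:3.9 744000
	}
}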

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-625000 image ls --format yaml --alsologtostderr:
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: a64c8a42351e8028db8aa42187db6fe7c5ad09cc8bcf50a6d2f0f3d1ee0e61f5
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-625000
size: "30"
- id: e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.2
size: "111000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-625000
size: "32900000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.2
size: "84700000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.2
size: "117000000"
- id: 7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.2
size: "62000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-625000 image ls --format yaml --alsologtostderr:
I0703 15:57:35.095820    3582 out.go:291] Setting OutFile to fd 1 ...
I0703 15:57:35.096071    3582 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 15:57:35.096081    3582 out.go:304] Setting ErrFile to fd 2...
I0703 15:57:35.096085    3582 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 15:57:35.096290    3582 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18998-1161/.minikube/bin
I0703 15:57:35.096883    3582 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0703 15:57:35.096980    3582 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0703 15:57:35.097452    3582 cli_runner.go:164] Run: docker container inspect functional-625000 --format={{.State.Status}}
I0703 15:57:35.120550    3582 ssh_runner.go:195] Run: systemctl --version
I0703 15:57:35.120664    3582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-625000
I0703 15:57:35.144576    3582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50104 SSHKeyPath:/Users/jenkins/minikube-integration/18998-1161/.minikube/machines/functional-625000/id_rsa Username:docker}
I0703 15:57:35.230864    3582 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-625000 ssh pgrep buildkitd: exit status 1 (251.752178ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image build -t localhost/my-image:functional-625000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 image build -t localhost/my-image:functional-625000 testdata/build --alsologtostderr: (1.918640449s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-625000 image build -t localhost/my-image:functional-625000 testdata/build --alsologtostderr:
I0703 15:57:35.586749    3593 out.go:291] Setting OutFile to fd 1 ...
I0703 15:57:35.587021    3593 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 15:57:35.587027    3593 out.go:304] Setting ErrFile to fd 2...
I0703 15:57:35.587031    3593 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 15:57:35.587214    3593 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18998-1161/.minikube/bin
I0703 15:57:35.587821    3593 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0703 15:57:35.589252    3593 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0703 15:57:35.589687    3593 cli_runner.go:164] Run: docker container inspect functional-625000 --format={{.State.Status}}
I0703 15:57:35.611468    3593 ssh_runner.go:195] Run: systemctl --version
I0703 15:57:35.611533    3593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-625000
I0703 15:57:35.634624    3593 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50104 SSHKeyPath:/Users/jenkins/minikube-integration/18998-1161/.minikube/machines/functional-625000/id_rsa Username:docker}
I0703 15:57:35.722270    3593 build_images.go:161] Building image from path: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.1164242990.tar
I0703 15:57:35.722369    3593 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0703 15:57:35.731585    3593 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1164242990.tar
I0703 15:57:35.736508    3593 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1164242990.tar: stat -c "%s %y" /var/lib/minikube/build/build.1164242990.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1164242990.tar': No such file or directory
I0703 15:57:35.736544    3593 ssh_runner.go:362] scp /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.1164242990.tar --> /var/lib/minikube/build/build.1164242990.tar (3072 bytes)
I0703 15:57:35.757486    3593 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1164242990
I0703 15:57:35.766284    3593 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1164242990 -xf /var/lib/minikube/build/build.1164242990.tar
I0703 15:57:35.777624    3593 docker.go:360] Building image: /var/lib/minikube/build/build.1164242990
I0703 15:57:35.777704    3593 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-625000 /var/lib/minikube/build/build.1164242990
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:6f08966e3d9627d02485e82c847cd16294f7e510b5fb84a4a42259543b39ca29 done
#8 naming to localhost/my-image:functional-625000 done
#8 DONE 0.0s
I0703 15:57:37.384242    3593 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-625000 /var/lib/minikube/build/build.1164242990: (1.606509854s)
I0703 15:57:37.384299    3593 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1164242990
I0703 15:57:37.392627    3593 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1164242990.tar
I0703 15:57:37.400870    3593 build_images.go:217] Built localhost/my-image:functional-625000 from /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.1164242990.tar
I0703 15:57:37.400899    3593 build_images.go:133] succeeded building to: functional-625000
I0703 15:57:37.400904    3593 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.40s)
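The ImageBuild pass above shows the full path minikube takes: tar the build context on the host, scp it to /var/lib/minikube/build inside the node, extract it, and run docker build there. A minimal Go sketch of driving the same build by hand, using the binary and names from this run (illustrative only; `minikube image build` is assumed to wrap exactly the flow logged above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Build the current directory as localhost/my-image inside the
	// functional-625000 node, the image name the ImageBuild log produces.
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-625000",
		"image", "build", "-t", "localhost/my-image:functional-625000", ".").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("image build failed:", err)
	}
}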

TestFunctional/parallel/ImageCommands/Setup (2.26s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.229662683s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-625000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.26s)

TestFunctional/parallel/DockerEnv/bash (1.33s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-625000 docker-env) && out/minikube-darwin-amd64 status -p functional-625000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-625000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.33s)
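The DockerEnv/bash test evaluates `docker-env` so the host's docker client talks to the daemon inside the node. A minimal sketch of the same check, copying the bash invocation logged above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same pattern as the test: eval docker-env, then run docker images
	// against the node's daemon in the same shell.
	script := `eval $(out/minikube-darwin-amd64 -p functional-625000 docker-env) && docker images`
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("docker-env check failed:", err)
	}
}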

TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image load --daemon gcr.io/google-containers/addon-resizer:functional-625000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 image load --daemon gcr.io/google-containers/addon-resizer:functional-625000 --alsologtostderr: (3.797388799s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.03s)
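`image load --daemon` pushes an image from the host's Docker daemon into the node's daemon. A small sketch chaining the tag/load/ls commands that appear in the Setup and ImageLoadDaemon logs above (illustrative; error handling reduced):

package main

import (
	"fmt"
	"os/exec"
)

// run executes one command and prints its combined output.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println(name, "failed:", err)
	}
}

func main() {
	// Tag in the host daemon, load into the node, then list the node's images.
	run("docker", "tag", "gcr.io/google-containers/addon-resizer:1.8.8",
		"gcr.io/google-containers/addon-resizer:functional-625000")
	run("out/minikube-darwin-amd64", "-p", "functional-625000",
		"image", "load", "--daemon", "gcr.io/google-containers/addon-resizer:functional-625000")
	run("out/minikube-darwin-amd64", "-p", "functional-625000", "image", "ls")
}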

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image load --daemon gcr.io/google-containers/addon-resizer:functional-625000 --alsologtostderr
E0703 15:56:03.063574    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 image load --daemon gcr.io/google-containers/addon-resizer:functional-625000 --alsologtostderr: (2.23384437s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.49s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.937318297s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-625000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image load --daemon gcr.io/google-containers/addon-resizer:functional-625000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 image load --daemon gcr.io/google-containers/addon-resizer:functional-625000 --alsologtostderr: (4.022681737s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.28s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image save gcr.io/google-containers/addon-resizer:functional-625000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 image save gcr.io/google-containers/addon-resizer:functional-625000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.578587085s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.58s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image rm gcr.io/google-containers/addon-resizer:functional-625000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.62s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.808178711s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.06s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-625000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image save --daemon gcr.io/google-containers/addon-resizer:functional-625000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 image save --daemon gcr.io/google-containers/addon-resizer:functional-625000 --alsologtostderr: (1.281519548s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-625000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.33s)
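Together, the passes above exercise the file-based image round trip: save the image to a tarball, remove it from the node, then load the tarball back. A compact sketch of that cycle using the paths and tag from this run (illustrative only):

package main

import (
	"fmt"
	"os/exec"
)

// mk runs one minikube subcommand against the functional-625000 profile.
func mk(args ...string) {
	out, err := exec.Command("out/minikube-darwin-amd64",
		append([]string{"-p", "functional-625000"}, args...)...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("minikube failed:", err)
	}
}

func main() {
	img := "gcr.io/google-containers/addon-resizer:functional-625000"
	tar := "/Users/jenkins/workspace/addon-resizer-save.tar" // path from this run
	mk("image", "save", img, tar) // SaveToFile
	mk("image", "rm", img)        // ImageRemove
	mk("image", "load", tar)      // LoadFromFile
	mk("image", "ls")
}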

TestFunctional/parallel/ServiceCmd/DeployApp (18.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-625000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-625000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-d2d47" [67d4a53b-0a2b-4e95-9d22-2da761175e28] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-d2d47" [67d4a53b-0a2b-4e95-9d22-2da761175e28] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 18.006566303s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (18.17s)
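The ServiceCmd tests all build on the same two kubectl calls logged above: create a deployment and expose it as a NodePort service. A minimal sketch (illustrative; the harness's 10m readiness wait is reduced to a single get):

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs one kubectl command against the functional-625000 context.
func kubectl(args ...string) {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-625000"}, args...)...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("kubectl failed:", err)
	}
}

func main() {
	kubectl("create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver:1.8")
	kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
	kubectl("get", "pods", "-l", "app=hello-node") // the test polls until Running
}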

TestFunctional/parallel/ServiceCmd/List (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.31s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 service list -o json
functional_test.go:1490: Took "308.267558ms" to run "out/minikube-darwin-amd64 -p functional-625000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-625000 service --namespace=default --https --url hello-node: signal: killed (15.00237778s)

-- stdout --
	https://127.0.0.1:50359

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:50359
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-625000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-625000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-625000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-625000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3183: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-625000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.15s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-625000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [2f5f3280-ad1a-4924-b461-8ec534a1887f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [2f5f3280-ad1a-4924-b461-8ec534a1887f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003361459s
I0703 15:56:45.236224    1695 kapi.go:184] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.15s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-625000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-625000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3210: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)
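The TunnelCmd serial steps above amount to: start `minikube tunnel` as a long-running process, wait for the LoadBalancer service to become reachable (on the Docker driver for darwin it lands on 127.0.0.1, as AccessDirect confirms), then kill the tunnel. A rough sketch, with the test's readiness polling replaced by a fixed sleep (note the real tunnel may prompt for sudo to program routes):

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"time"
)

func main() {
	tunnel := exec.Command("out/minikube-darwin-amd64", "-p", "functional-625000",
		"tunnel", "--alsologtostderr")
	if err := tunnel.Start(); err != nil {
		fmt.Println("tunnel start failed:", err)
		return
	}
	time.Sleep(5 * time.Second) // crude stand-in for the harness's readiness polling
	// Probe nginx-svc through the tunnel, as AccessDirect does.
	resp, err := http.Get("http://127.0.0.1")
	if err == nil {
		fmt.Println("tunnel at http://127.0.0.1 status:", resp.Status)
		resp.Body.Close()
	} else {
		fmt.Println("probe failed:", err)
	}
	tunnel.Process.Kill() // DeleteTunnel
}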

TestFunctional/parallel/ServiceCmd/Format (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-625000 service hello-node --url --format={{.IP}}: signal: killed (15.002672059s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-625000 service hello-node --url: signal: killed (15.002272466s)

-- stdout --
	http://127.0.0.1:50428

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:50428
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/MountCmd/any-port (7.98s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-625000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2850545564/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1720047440862247000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2850545564/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1720047440862247000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2850545564/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1720047440862247000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2850545564/001/test-1720047440862247000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-625000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (288.091624ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0703 15:57:21.151049    1695 retry.go:31] will retry after 460.805908ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul  3 22:57 created-by-test
-rw-r--r-- 1 docker docker 24 Jul  3 22:57 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul  3 22:57 test-1720047440862247000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh cat /mount-9p/test-1720047440862247000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-625000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [858043dd-64f0-47e8-8664-a53a70196152] Pending
helpers_test.go:344: "busybox-mount" [858043dd-64f0-47e8-8664-a53a70196152] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [858043dd-64f0-47e8-8664-a53a70196152] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [858043dd-64f0-47e8-8664-a53a70196152] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003194648s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-625000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-625000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2850545564/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.98s)
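The mount tests follow a start/verify/cleanup shape: run `minikube mount` as a daemon, retry `findmnt` over ssh until the 9p mount shows up (a failing first attempt, as above, is expected), then kill the mount process. A sketch with a hypothetical host directory /tmp/demo-src in place of the test's TempDir:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// /tmp/demo-src is a hypothetical stand-in for the per-test temp dir.
	mnt := exec.Command("out/minikube-darwin-amd64", "mount", "-p", "functional-625000",
		"/tmp/demo-src:/mount-9p")
	if err := mnt.Start(); err != nil {
		fmt.Println("mount start failed:", err)
		return
	}
	defer mnt.Process.Kill()
	// The harness retries findmnt until the 9p mount appears; do the same.
	for i := 0; i < 5; i++ {
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-625000",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("mount never appeared")
}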

TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "466.092305ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "78.403361ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "382.773669ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "84.357763ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

TestFunctional/parallel/MountCmd/specific-port (1.95s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-625000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port1619151092/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-625000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (308.710626ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0703 15:57:29.154623    1695 retry.go:31] will retry after 579.785468ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-625000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port1619151092/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-625000 ssh "sudo umount -f /mount-9p": exit status 1 (253.986193ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-625000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-625000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port1619151092/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.95s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.33s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-625000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1408740517/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-625000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1408740517/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-625000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1408740517/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-625000 ssh "findmnt -T" /mount1: exit status 1 (419.688815ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0703 15:57:31.229112    1695 retry.go:31] will retry after 357.551194ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-625000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-625000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1408740517/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-625000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1408740517/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-625000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1408740517/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.33s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-625000
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-625000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-625000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (94.67s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-378000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker 
E0703 15:58:26.425614    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-378000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker : (1m33.952403384s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (94.67s)
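StartCluster boots a three-control-plane (--ha) cluster and then checks per-node status; the exact command lines are in the log above. A minimal wrapper around those two invocations (illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Command lines copied from the StartCluster log above.
	for _, args := range [][]string{
		{"start", "-p", "ha-378000", "--wait=true", "--memory=2200", "--ha",
			"-v=7", "--alsologtostderr", "--driver=docker"},
		{"-p", "ha-378000", "status", "-v=7", "--alsologtostderr"},
	} {
		out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("minikube failed:", err)
			return
		}
	}
}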

TestMultiControlPlane/serial/DeployApp (5.33s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-378000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-378000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-378000 -- rollout status deployment/busybox: (2.605960533s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-378000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-378000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-378000 -- exec busybox-fc5497c4f-2p654 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-378000 -- exec busybox-fc5497c4f-stw9b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-378000 -- exec busybox-fc5497c4f-z8bps -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-378000 -- exec busybox-fc5497c4f-2p654 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-378000 -- exec busybox-fc5497c4f-stw9b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-378000 -- exec busybox-fc5497c4f-z8bps -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-378000 -- exec busybox-fc5497c4f-2p654 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-378000 -- exec busybox-fc5497c4f-stw9b -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-378000 -- exec busybox-fc5497c4f-z8bps -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.33s)

TestMultiControlPlane/serial/PingHostFromPods (1.33s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-378000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-378000 -- exec busybox-fc5497c4f-2p654 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-378000 -- exec busybox-fc5497c4f-2p654 -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-378000 -- exec busybox-fc5497c4f-stw9b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-378000 -- exec busybox-fc5497c4f-stw9b -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-378000 -- exec busybox-fc5497c4f-z8bps -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-378000 -- exec busybox-fc5497c4f-z8bps -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.33s)

TestMultiControlPlane/serial/AddWorkerNode (19.38s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-378000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-378000 -v=7 --alsologtostderr: (18.516611679s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (19.38s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-378000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

TestMultiControlPlane/serial/CopyFile (16.35s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 cp testdata/cp-test.txt ha-378000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 cp ha-378000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiControlPlaneserialCopyFile4182506725/001/cp-test_ha-378000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 cp ha-378000:/home/docker/cp-test.txt ha-378000-m02:/home/docker/cp-test_ha-378000_ha-378000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000-m02 "sudo cat /home/docker/cp-test_ha-378000_ha-378000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 cp ha-378000:/home/docker/cp-test.txt ha-378000-m03:/home/docker/cp-test_ha-378000_ha-378000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000-m03 "sudo cat /home/docker/cp-test_ha-378000_ha-378000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 cp ha-378000:/home/docker/cp-test.txt ha-378000-m04:/home/docker/cp-test_ha-378000_ha-378000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000-m04 "sudo cat /home/docker/cp-test_ha-378000_ha-378000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 cp testdata/cp-test.txt ha-378000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 cp ha-378000-m02:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiControlPlaneserialCopyFile4182506725/001/cp-test_ha-378000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 cp ha-378000-m02:/home/docker/cp-test.txt ha-378000:/home/docker/cp-test_ha-378000-m02_ha-378000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000 "sudo cat /home/docker/cp-test_ha-378000-m02_ha-378000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 cp ha-378000-m02:/home/docker/cp-test.txt ha-378000-m03:/home/docker/cp-test_ha-378000-m02_ha-378000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000-m03 "sudo cat /home/docker/cp-test_ha-378000-m02_ha-378000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 cp ha-378000-m02:/home/docker/cp-test.txt ha-378000-m04:/home/docker/cp-test_ha-378000-m02_ha-378000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000-m04 "sudo cat /home/docker/cp-test_ha-378000-m02_ha-378000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 cp testdata/cp-test.txt ha-378000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 cp ha-378000-m03:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiControlPlaneserialCopyFile4182506725/001/cp-test_ha-378000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 cp ha-378000-m03:/home/docker/cp-test.txt ha-378000:/home/docker/cp-test_ha-378000-m03_ha-378000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000 "sudo cat /home/docker/cp-test_ha-378000-m03_ha-378000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 cp ha-378000-m03:/home/docker/cp-test.txt ha-378000-m02:/home/docker/cp-test_ha-378000-m03_ha-378000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000-m02 "sudo cat /home/docker/cp-test_ha-378000-m03_ha-378000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 cp ha-378000-m03:/home/docker/cp-test.txt ha-378000-m04:/home/docker/cp-test_ha-378000-m03_ha-378000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000-m04 "sudo cat /home/docker/cp-test_ha-378000-m03_ha-378000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 cp testdata/cp-test.txt ha-378000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 cp ha-378000-m04:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiControlPlaneserialCopyFile4182506725/001/cp-test_ha-378000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 cp ha-378000-m04:/home/docker/cp-test.txt ha-378000:/home/docker/cp-test_ha-378000-m04_ha-378000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000 "sudo cat /home/docker/cp-test_ha-378000-m04_ha-378000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 cp ha-378000-m04:/home/docker/cp-test.txt ha-378000-m02:/home/docker/cp-test_ha-378000-m04_ha-378000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000-m02 "sudo cat /home/docker/cp-test_ha-378000-m04_ha-378000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 cp ha-378000-m04:/home/docker/cp-test.txt ha-378000-m03:/home/docker/cp-test_ha-378000-m04_ha-378000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 ssh -n ha-378000-m03 "sudo cat /home/docker/cp-test_ha-378000-m04_ha-378000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.35s)
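Each CopyFile step is a `minikube cp` into one node followed by `minikube ssh -n <node> sudo cat` to verify; the matrix above repeats that for every source/destination pair. One pair as a sketch (illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Copy a file into ha-378000-m02, then read it back over ssh,
	// mirroring one cp/ssh pair from the CopyFile log above.
	steps := [][]string{
		{"-p", "ha-378000", "cp", "testdata/cp-test.txt", "ha-378000-m02:/home/docker/cp-test.txt"},
		{"-p", "ha-378000", "ssh", "-n", "ha-378000-m02", "sudo cat /home/docker/cp-test.txt"},
	}
	for _, args := range steps {
		out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("minikube failed:", err)
			return
		}
	}
}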

TestMultiControlPlane/serial/StopSecondaryNode (11.41s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-378000 node stop m02 -v=7 --alsologtostderr: (10.742314049s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-378000 status -v=7 --alsologtostderr: exit status 7 (664.058481ms)

-- stdout --
	ha-378000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-378000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-378000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-378000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0703 16:00:12.208336    4401 out.go:291] Setting OutFile to fd 1 ...
	I0703 16:00:12.208539    4401 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 16:00:12.208545    4401 out.go:304] Setting ErrFile to fd 2...
	I0703 16:00:12.208549    4401 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 16:00:12.208733    4401 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18998-1161/.minikube/bin
	I0703 16:00:12.208950    4401 out.go:298] Setting JSON to false
	I0703 16:00:12.208972    4401 mustload.go:65] Loading cluster: ha-378000
	I0703 16:00:12.209016    4401 notify.go:220] Checking for updates...
	I0703 16:00:12.209298    4401 config.go:182] Loaded profile config "ha-378000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0703 16:00:12.209316    4401 status.go:174] checking status of ha-378000 ...
	I0703 16:00:12.209720    4401 cli_runner.go:164] Run: docker container inspect ha-378000 --format={{.State.Status}}
	I0703 16:00:12.230930    4401 status.go:364] ha-378000 host status = "Running" (err=<nil>)
	I0703 16:00:12.230980    4401 host.go:66] Checking if "ha-378000" exists ...
	I0703 16:00:12.231257    4401 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378000
	I0703 16:00:12.252002    4401 host.go:66] Checking if "ha-378000" exists ...
	I0703 16:00:12.252261    4401 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 16:00:12.252333    4401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378000
	I0703 16:00:12.272671    4401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50566 SSHKeyPath:/Users/jenkins/minikube-integration/18998-1161/.minikube/machines/ha-378000/id_rsa Username:docker}
	I0703 16:00:12.361986    4401 ssh_runner.go:195] Run: systemctl --version
	I0703 16:00:12.366882    4401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 16:00:12.377620    4401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-378000
	I0703 16:00:12.399422    4401 kubeconfig.go:125] found "ha-378000" server: "https://127.0.0.1:50570"
	I0703 16:00:12.399454    4401 api_server.go:166] Checking apiserver status ...
	I0703 16:00:12.399493    4401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 16:00:12.410549    4401 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2402/cgroup
	W0703 16:00:12.419605    4401 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2402/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0703 16:00:12.419662    4401 ssh_runner.go:195] Run: ls
	I0703 16:00:12.423541    4401 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50570/healthz ...
	I0703 16:00:12.427613    4401 api_server.go:279] https://127.0.0.1:50570/healthz returned 200:
	ok
	I0703 16:00:12.427626    4401 status.go:456] ha-378000 apiserver status = Running (err=<nil>)
	I0703 16:00:12.427636    4401 status.go:176] ha-378000 status: &{Name:ha-378000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 16:00:12.427647    4401 status.go:174] checking status of ha-378000-m02 ...
	I0703 16:00:12.427887    4401 cli_runner.go:164] Run: docker container inspect ha-378000-m02 --format={{.State.Status}}
	I0703 16:00:12.448523    4401 status.go:364] ha-378000-m02 host status = "Stopped" (err=<nil>)
	I0703 16:00:12.448548    4401 status.go:377] host is not running, skipping remaining checks
	I0703 16:00:12.448556    4401 status.go:176] ha-378000-m02 status: &{Name:ha-378000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 16:00:12.448569    4401 status.go:174] checking status of ha-378000-m03 ...
	I0703 16:00:12.448862    4401 cli_runner.go:164] Run: docker container inspect ha-378000-m03 --format={{.State.Status}}
	I0703 16:00:12.469908    4401 status.go:364] ha-378000-m03 host status = "Running" (err=<nil>)
	I0703 16:00:12.469935    4401 host.go:66] Checking if "ha-378000-m03" exists ...
	I0703 16:00:12.470205    4401 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378000-m03
	I0703 16:00:12.491328    4401 host.go:66] Checking if "ha-378000-m03" exists ...
	I0703 16:00:12.491611    4401 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 16:00:12.491663    4401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378000-m03
	I0703 16:00:12.512573    4401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50670 SSHKeyPath:/Users/jenkins/minikube-integration/18998-1161/.minikube/machines/ha-378000-m03/id_rsa Username:docker}
	I0703 16:00:12.599513    4401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 16:00:12.610377    4401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-378000
	I0703 16:00:12.632091    4401 kubeconfig.go:125] found "ha-378000" server: "https://127.0.0.1:50570"
	I0703 16:00:12.632115    4401 api_server.go:166] Checking apiserver status ...
	I0703 16:00:12.632151    4401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 16:00:12.642773    4401 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2262/cgroup
	W0703 16:00:12.651454    4401 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2262/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0703 16:00:12.651520    4401 ssh_runner.go:195] Run: ls
	I0703 16:00:12.655199    4401 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50570/healthz ...
	I0703 16:00:12.659291    4401 api_server.go:279] https://127.0.0.1:50570/healthz returned 200:
	ok
	I0703 16:00:12.659304    4401 status.go:456] ha-378000-m03 apiserver status = Running (err=<nil>)
	I0703 16:00:12.659316    4401 status.go:176] ha-378000-m03 status: &{Name:ha-378000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 16:00:12.659326    4401 status.go:174] checking status of ha-378000-m04 ...
	I0703 16:00:12.659571    4401 cli_runner.go:164] Run: docker container inspect ha-378000-m04 --format={{.State.Status}}
	I0703 16:00:12.679789    4401 status.go:364] ha-378000-m04 host status = "Running" (err=<nil>)
	I0703 16:00:12.679815    4401 host.go:66] Checking if "ha-378000-m04" exists ...
	I0703 16:00:12.680077    4401 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378000-m04
	I0703 16:00:12.700252    4401 host.go:66] Checking if "ha-378000-m04" exists ...
	I0703 16:00:12.700509    4401 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 16:00:12.700563    4401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378000-m04
	I0703 16:00:12.721020    4401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50796 SSHKeyPath:/Users/jenkins/minikube-integration/18998-1161/.minikube/machines/ha-378000-m04/id_rsa Username:docker}
	I0703 16:00:12.807226    4401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 16:00:12.817633    4401 status.go:176] ha-378000-m04 status: &{Name:ha-378000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
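
For reference, the status probe traced above follows a fixed per-node recipe: read the container state from Docker, resolve the host port that Docker forwards to the node's apiserver (8443/tcp), then hit /healthz on it. A rough manual equivalent, reusing the forwarded port from this run (ports are assigned per run):

	# container state as Docker reports it
	docker container inspect ha-378000 --format '{{.State.Status}}'
	# host port forwarded to the apiserver's 8443/tcp
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-378000
	# health probe; -k because the cluster's apiserver certificate is self-signed
	curl -sk https://127.0.0.1:50570/healthz    # expect: ok
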
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.41s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

TestMultiControlPlane/serial/RestartSecondaryNode (24.67s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-378000 node start m02 -v=7 --alsologtostderr: (23.245397922s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-darwin-amd64 -p ha-378000 status -v=7 --alsologtostderr: (1.363353487s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (24.67s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.33s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.326637587s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.33s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (136.44s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-378000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-378000 -v=7 --alsologtostderr
E0703 16:00:42.582506    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 16:01:00.527220    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 16:01:00.532555    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 16:01:00.542967    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 16:01:00.563465    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 16:01:00.605610    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 16:01:00.686550    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 16:01:00.847315    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 16:01:01.168173    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 16:01:01.808476    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 16:01:03.089195    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 16:01:05.651540    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 16:01:10.270650    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 16:01:10.776129    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-378000 -v=7 --alsologtostderr: (33.695785896s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-378000 --wait=true -v=7 --alsologtostderr
E0703 16:01:21.026130    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 16:01:41.514945    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0703 16:02:22.479161    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-378000 --wait=true -v=7 --alsologtostderr: (1m42.625536862s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-378000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (136.44s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.85s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-378000 node delete m03 -v=7 --alsologtostderr: (10.073722234s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
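
The go-template above iterates every node and prints only the status of its Ready condition, one value per line, so a fully healthy cluster prints nothing but "True" lines. A roughly equivalent jsonpath form (an alternative sketch, not what the test runs):

	kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
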
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.85s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)

TestMultiControlPlane/serial/StopCluster (32.43s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-378000 stop -v=7 --alsologtostderr: (32.313934797s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-378000 status -v=7 --alsologtostderr: exit status 7 (119.62409ms)

-- stdout --
	ha-378000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-378000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-378000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0703 16:03:39.912465    4828 out.go:291] Setting OutFile to fd 1 ...
	I0703 16:03:39.912651    4828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 16:03:39.912656    4828 out.go:304] Setting ErrFile to fd 2...
	I0703 16:03:39.912659    4828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 16:03:39.912857    4828 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18998-1161/.minikube/bin
	I0703 16:03:39.913038    4828 out.go:298] Setting JSON to false
	I0703 16:03:39.913060    4828 mustload.go:65] Loading cluster: ha-378000
	I0703 16:03:39.913096    4828 notify.go:220] Checking for updates...
	I0703 16:03:39.913370    4828 config.go:182] Loaded profile config "ha-378000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0703 16:03:39.913388    4828 status.go:174] checking status of ha-378000 ...
	I0703 16:03:39.913769    4828 cli_runner.go:164] Run: docker container inspect ha-378000 --format={{.State.Status}}
	I0703 16:03:39.935827    4828 status.go:364] ha-378000 host status = "Stopped" (err=<nil>)
	I0703 16:03:39.935851    4828 status.go:377] host is not running, skipping remaining checks
	I0703 16:03:39.935858    4828 status.go:176] ha-378000 status: &{Name:ha-378000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 16:03:39.935882    4828 status.go:174] checking status of ha-378000-m02 ...
	I0703 16:03:39.936153    4828 cli_runner.go:164] Run: docker container inspect ha-378000-m02 --format={{.State.Status}}
	I0703 16:03:39.956980    4828 status.go:364] ha-378000-m02 host status = "Stopped" (err=<nil>)
	I0703 16:03:39.957005    4828 status.go:377] host is not running, skipping remaining checks
	I0703 16:03:39.957010    4828 status.go:176] ha-378000-m02 status: &{Name:ha-378000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 16:03:39.957020    4828 status.go:174] checking status of ha-378000-m04 ...
	I0703 16:03:39.957262    4828 cli_runner.go:164] Run: docker container inspect ha-378000-m04 --format={{.State.Status}}
	I0703 16:03:39.977576    4828 status.go:364] ha-378000-m04 host status = "Stopped" (err=<nil>)
	I0703 16:03:39.977605    4828 status.go:377] host is not running, skipping remaining checks
	I0703 16:03:39.977612    4828 status.go:176] ha-378000-m04 status: &{Name:ha-378000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
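
Note that `status` exits non-zero by design once anything is stopped (exit status 7 above), so scripted callers need to tolerate the exit code. A minimal sketch, assuming this build's `status` supports `--output json`, with `|| true` as the guard:

	# capture machine-readable status without aborting a `set -e` script
	out/minikube-darwin-amd64 -p ha-378000 status --output json || true
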
--- PASS: TestMultiControlPlane/serial/StopCluster (32.43s)

TestMultiControlPlane/serial/RestartCluster (107.28s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-378000 --wait=true -v=7 --alsologtostderr --driver=docker 
E0703 16:03:44.401346    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-378000 --wait=true -v=7 --alsologtostderr --driver=docker : (1m46.500829471s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (107.28s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

TestMultiControlPlane/serial/AddSecondaryNode (35.86s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-378000 --control-plane -v=7 --alsologtostderr
E0703 16:05:42.610056    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
E0703 16:06:00.554174    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-378000 --control-plane -v=7 --alsologtostderr: (34.977129983s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-378000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (35.86s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

TestImageBuild/serial/Setup (19.8s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-165000 --driver=docker 
E0703 16:06:28.243593    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/functional-625000/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-165000 --driver=docker : (19.800768804s)
--- PASS: TestImageBuild/serial/Setup (19.80s)

TestImageBuild/serial/NormalBuild (1.71s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-165000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-165000: (1.711918344s)
--- PASS: TestImageBuild/serial/NormalBuild (1.71s)

TestImageBuild/serial/BuildWithBuildArg (0.85s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-165000
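
The `--build-opt` flags above are forwarded to the underlying image build, so `build-opt=build-arg=ENV_A=test_env_str` and `build-opt=no-cache` should act like `--build-arg ENV_A=test_env_str --no-cache` on a plain `docker build`. The invocation, reformatted for readability:

	out/minikube-darwin-amd64 image build -t aaa:latest \
	  --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache \
	  ./testdata/image-build/test-arg -p image-165000
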
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.85s)

TestImageBuild/serial/BuildWithDockerIgnore (0.71s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-165000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.71s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.69s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-165000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.69s)

TestJSONOutput/start/Command (73.1s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-501000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-501000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (1m13.101396714s)
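
With --output=json, each progress step arrives as an io.k8s.sigs.minikube.step event carrying currentstep/totalsteps/name (the envelope shape is visible in the TestErrorJSONOutput transcript below). A sketch of turning that stream into a progress readout; jq is an assumption here, not part of the test:

	out/minikube-darwin-amd64 start -p json-output-501000 --output=json --user=testUser \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + "/" + .data.totalsteps + ": " + .data.name'
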
--- PASS: TestJSONOutput/start/Command (73.10s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.47s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-501000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.47s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.47s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-501000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.47s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.7s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-501000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-501000 --output=json --user=testUser: (10.700800402s)
--- PASS: TestJSONOutput/stop/Command (10.70s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.58s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-632000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-632000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (356.005064ms)

-- stdout --
	{"specversion":"1.0","id":"9209ee13-1b17-43a0-bf9a-2e35e0b898b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-632000] minikube v1.33.1 on Darwin 14.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2393d8c1-ee60-4a27-bb55-888656bed92d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18998"}}
	{"specversion":"1.0","id":"19297276-bd27-42bc-9e39-e2308f1dcb2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18998-1161/kubeconfig"}}
	{"specversion":"1.0","id":"d222ffe4-257e-49d1-883d-0cf644b57071","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"f702ea07-906d-4b86-a10c-ac7200dffe93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fe4574b7-c275-4e4e-b0ae-95787bc03149","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18998-1161/.minikube"}}
	{"specversion":"1.0","id":"7c0630ac-fb2c-4957-b537-21ccb149eb55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"951dcc41-c7d7-42f2-ba9f-7080015fda0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-632000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-632000
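
Every line the command prints is a self-contained CloudEvents envelope, which is what makes the failure machine-readable. A sketch of extracting the error with jq (jq itself is an assumption, not part of the test):

	out/minikube-darwin-amd64 start -p json-output-error-632000 --memory=2200 --output=json --driver=fail 2>/dev/null \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# expect: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on darwin/amd64
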
--- PASS: TestErrorJSONOutput (0.58s)

TestKicCustomNetwork/create_custom_network (20.93s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-562000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-562000 --network=: (18.902461201s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-562000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-562000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-562000: (2.004798308s)
--- PASS: TestKicCustomNetwork/create_custom_network (20.93s)

TestKicCustomNetwork/use_default_bridge_network (20.88s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-472000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-472000 --network=bridge: (18.949835198s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-472000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-472000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-472000: (1.909649693s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (20.88s)

TestKicExistingNetwork (21.01s)

=== RUN   TestKicExistingNetwork
I0703 16:08:46.727505    1695 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0703 16:08:46.748034    1695 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0703 16:08:46.748134    1695 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0703 16:08:46.748154    1695 cli_runner.go:164] Run: docker network inspect existing-network
W0703 16:08:46.767828    1695 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0703 16:08:46.767849    1695 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0703 16:08:46.767872    1695 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0703 16:08:46.768041    1695 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0703 16:08:46.788712    1695 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001acf5c0}
I0703 16:08:46.788755    1695 network_create.go:124] attempt to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
I0703 16:08:46.788829    1695 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
W0703 16:08:46.808868    1695 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network returned with exit code 1
W0703 16:08:46.808902    1695 network_create.go:149] failed to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network: exit status 1
stdout:

stderr:
Error response from daemon: Pool overlaps with other one on this address space
W0703 16:08:46.808918    1695 network_create.go:116] failed to create docker network existing-network 192.168.49.0/24, will retry: subnet is taken
I0703 16:08:46.810539    1695 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I0703 16:08:46.810910    1695 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fa2140}
I0703 16:08:46.810925    1695 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
I0703 16:08:46.810998    1695 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0703 16:08:46.867232    1695 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-250000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-250000 --network=existing-network: (18.932112234s)
helpers_test.go:175: Cleaning up "existing-network-250000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-250000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-250000: (1.893037359s)
I0703 16:09:07.712826    1695 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
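
The trace above shows the subnet-collision retry in full: the first candidate 192.168.49.0/24 is rejected by the daemon ("Pool overlaps with other one on this address space"), so the next free private subnet 192.168.58.0/24 is tried and succeeds. The create command, condensed from the log (the extra -o --ip-masq / -o --icc options are elided here):

	docker network create --driver=bridge \
	  --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
	  -o com.docker.network.driver.mtu=65535 \
	  --label=created_by.minikube.sigs.k8s.io=true existing-network
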
--- PASS: TestKicExistingNetwork (21.01s)

TestKicCustomSubnet (21.18s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-162000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-162000 --subnet=192.168.60.0/24: (19.180758823s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-162000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-162000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-162000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-162000: (1.980503087s)
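
The check here is just that the Docker network backing the profile carries the requested subnet; the inspect format from the log is the entire assertion, presumably printing the value passed to --subnet:

	docker network inspect custom-subnet-162000 --format '{{(index .IPAM.Config 0).Subnet}}'
	# expected: 192.168.60.0/24
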
--- PASS: TestKicCustomSubnet (21.18s)

TestKicStaticIP (21.23s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-619000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-619000 --static-ip=192.168.200.200: (19.079915537s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-619000 ip
helpers_test.go:175: Cleaning up "static-ip-619000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-619000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-619000: (1.977631308s)
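
Same pattern for static IPs: start with --static-ip, then confirm that `ip` echoes it back. Condensed from the run above; the equality is what the test presumably asserts:

	out/minikube-darwin-amd64 start -p static-ip-619000 --static-ip=192.168.200.200
	out/minikube-darwin-amd64 -p static-ip-619000 ip    # expected: 192.168.200.200
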
--- PASS: TestKicStaticIP (21.23s)

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (43.26s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-466000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-466000 --driver=docker : (18.894545735s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-468000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-468000 --driver=docker : (19.02986105s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-466000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-468000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-468000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-468000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-468000: (1.998249734s)
helpers_test.go:175: Cleaning up "first-466000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-466000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-466000: (2.03878129s)
--- PASS: TestMinikubeProfile (43.26s)

TestMountStart/serial/StartWithMountFirst (6.49s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-039000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-039000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (5.48986258s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.49s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-039000 ssh -- ls /minikube-host
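
The verify step is deliberately minimal: if the host directory is mounted at /minikube-host, listing it over ssh succeeds; if not, the command exits non-zero and the test fails. The start/verify pair, condensed from the run above:

	out/minikube-darwin-amd64 start -p mount-start-1-039000 --memory=2048 --mount \
	  --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
	  --no-kubernetes --driver=docker
	out/minikube-darwin-amd64 -p mount-start-1-039000 ssh -- ls /minikube-host
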
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (6.43s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-052000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
E0703 16:10:42.613545    1695 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18998-1161/.minikube/profiles/addons-267000/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-052000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (5.43188817s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.43s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-052000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.66s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-039000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-039000 --alsologtostderr -v=5: (1.65902081s)
--- PASS: TestMountStart/serial/DeleteFirst (1.66s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-052000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.42s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-052000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-052000: (1.415160917s)
--- PASS: TestMountStart/serial/Stop (1.42s)

TestMountStart/serial/RestartStopped (7.86s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-052000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-052000: (6.856672931s)
--- PASS: TestMountStart/serial/RestartStopped (7.86s)

TestMultiNode/serial/CopyFile (0.08s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-966000 status --output json --alsologtostderr
--- PASS: TestMultiNode/serial/CopyFile (0.08s)

TestPreload (117.01s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-794000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-794000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m13.295081853s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-794000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-794000 image pull gcr.io/k8s-minikube/busybox: (1.448591743s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-794000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-794000: (10.713499136s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-794000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-794000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (29.300879036s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-794000 image list
helpers_test.go:175: Cleaning up "test-preload-794000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-794000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-794000: (2.024474087s)
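
The preload round-trip above condenses to five commands: build a cluster with the preload tarball disabled, pull an extra image, stop, restart with state intact, and confirm the image survived. The same flow, extracted from the log:

	out/minikube-darwin-amd64 start -p test-preload-794000 --memory=2200 --preload=false --driver=docker --kubernetes-version=v1.24.4
	out/minikube-darwin-amd64 -p test-preload-794000 image pull gcr.io/k8s-minikube/busybox
	out/minikube-darwin-amd64 stop -p test-preload-794000
	out/minikube-darwin-amd64 start -p test-preload-794000 --memory=2200 --driver=docker
	out/minikube-darwin-amd64 -p test-preload-794000 image list    # busybox should still be listed
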
--- PASS: TestPreload (117.01s)

Test skip (17/204)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

TestDownloadOnly/v1.30.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

TestAddons/parallel/Registry (15.95s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 12.422371ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-vfrh8" [54d641c1-697e-4cb8-9ee3-ba69dd7ca59b] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005522014s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xtnsv" [0a004e5a-4977-40d0-b913-cddd3fa7c55f] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005173653s
addons_test.go:342: (dbg) Run:  kubectl --context addons-267000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-267000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-267000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.834434779s)
addons_test.go:357: Unable to complete rest of the test due to connectivity assumptions
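
The skip is expected on this platform: the in-cluster probe above succeeds, but the remaining assertions assume the host can reach cluster addresses directly, which the Docker driver on macOS generally cannot do without a tunnel. The probe that does work, for reference, runs wget against the registry's in-cluster DNS name from a throwaway busybox pod:

	kubectl --context addons-267000 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
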
--- SKIP: TestAddons/parallel/Registry (15.95s)

TestAddons/parallel/Ingress (10.72s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-267000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-267000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-267000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [25dbc320-6d96-44bd-87f4-9684c3d5c18f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [25dbc320-6d96-44bd-87f4-9684c3d5c18f] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004858634s
I0703 15:51:37.544117    1695 kapi.go:184] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-267000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (10.72s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (7.12s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-625000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-625000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-768hl" [41fed361-0980-4ca6-9ccf-3cb601307d99] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-768hl" [41fed361-0980-4ca6-9ccf-3cb601307d99] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.002975733s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (7.12s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)