Test Report: Docker_macOS 18239

59a59c81047135cbdfd2a30078659de6ff7ddc30:2024-03-07:33453

Tests failed: 20 of 211

TestOffline (754.67s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-028000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-028000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m33.749367262s)
-- stdout --
	* [offline-docker-028000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18239
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "offline-docker-028000" primary control-plane node in "offline-docker-028000" cluster
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-028000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
-- /stdout --
** stderr ** 
	I0307 11:27:29.989143   20993 out.go:291] Setting OutFile to fd 1 ...
	I0307 11:27:29.989340   20993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 11:27:29.989345   20993 out.go:304] Setting ErrFile to fd 2...
	I0307 11:27:29.989349   20993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 11:27:29.989522   20993 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 11:27:29.991345   20993 out.go:298] Setting JSON to false
	I0307 11:27:30.015466   20993 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":8821,"bootTime":1709830829,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0307 11:27:30.015552   20993 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 11:27:30.037707   20993 out.go:177] * [offline-docker-028000] minikube v1.32.0 on Darwin 14.3.1
	I0307 11:27:30.079282   20993 out.go:177]   - MINIKUBE_LOCATION=18239
	I0307 11:27:30.079306   20993 notify.go:220] Checking for updates...
	I0307 11:27:30.100434   20993 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig
	I0307 11:27:30.121238   20993 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0307 11:27:30.142203   20993 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 11:27:30.164189   20993 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube
	I0307 11:27:30.185132   20993 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 11:27:30.206439   20993 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 11:27:30.263181   20993 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0307 11:27:30.263364   20993 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 11:27:30.366453   20993 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:false NGoroutines:195 SystemTime:2024-03-07 19:27:30.355094946 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213279744 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0307 11:27:30.408883   20993 out.go:177] * Using the docker driver based on user configuration
	I0307 11:27:30.429945   20993 start.go:297] selected driver: docker
	I0307 11:27:30.429967   20993 start.go:901] validating driver "docker" against <nil>
	I0307 11:27:30.429985   20993 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 11:27:30.433426   20993 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 11:27:30.531785   20993 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:false NGoroutines:195 SystemTime:2024-03-07 19:27:30.521105045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213279744 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0307 11:27:30.531966   20993 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 11:27:30.532165   20993 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 11:27:30.552945   20993 out.go:177] * Using Docker Desktop driver with root privileges
	I0307 11:27:30.573965   20993 cni.go:84] Creating CNI manager for ""
	I0307 11:27:30.574010   20993 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 11:27:30.574024   20993 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 11:27:30.574110   20993 start.go:340] cluster config:
	{Name:offline-docker-028000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-028000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 11:27:30.594862   20993 out.go:177] * Starting "offline-docker-028000" primary control-plane node in "offline-docker-028000" cluster
	I0307 11:27:30.637029   20993 cache.go:121] Beginning downloading kic base image for docker with docker
	I0307 11:27:30.678974   20993 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0307 11:27:30.720970   20993 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 11:27:30.721019   20993 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 11:27:30.721042   20993 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0307 11:27:30.721054   20993 cache.go:56] Caching tarball of preloaded images
	I0307 11:27:30.721225   20993 preload.go:173] Found /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 11:27:30.721240   20993 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 11:27:30.722485   20993 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/offline-docker-028000/config.json ...
	I0307 11:27:30.722588   20993 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/offline-docker-028000/config.json: {Name:mk5abe4adaf838ab3063d7403e9b0dc245200f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 11:27:30.771821   20993 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0307 11:27:30.771847   20993 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0307 11:27:30.771865   20993 cache.go:194] Successfully downloaded all kic artifacts
	I0307 11:27:30.771896   20993 start.go:360] acquireMachinesLock for offline-docker-028000: {Name:mkdb8d7ce527dac36c08b95f17bba4b95f74491f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 11:27:30.772035   20993 start.go:364] duration metric: took 128.353µs to acquireMachinesLock for "offline-docker-028000"
	I0307 11:27:30.772061   20993 start.go:93] Provisioning new machine with config: &{Name:offline-docker-028000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-028000 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 11:27:30.772283   20993 start.go:125] createHost starting for "" (driver="docker")
	I0307 11:27:30.815128   20993 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0307 11:27:30.815507   20993 start.go:159] libmachine.API.Create for "offline-docker-028000" (driver="docker")
	I0307 11:27:30.815558   20993 client.go:168] LocalClient.Create starting
	I0307 11:27:30.815751   20993 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18239-8734/.minikube/certs/ca.pem
	I0307 11:27:30.815846   20993 main.go:141] libmachine: Decoding PEM data...
	I0307 11:27:30.815876   20993 main.go:141] libmachine: Parsing certificate...
	I0307 11:27:30.816038   20993 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18239-8734/.minikube/certs/cert.pem
	I0307 11:27:30.816116   20993 main.go:141] libmachine: Decoding PEM data...
	I0307 11:27:30.816132   20993 main.go:141] libmachine: Parsing certificate...
	I0307 11:27:30.816934   20993 cli_runner.go:164] Run: docker network inspect offline-docker-028000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0307 11:27:30.889270   20993 cli_runner.go:211] docker network inspect offline-docker-028000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0307 11:27:30.889361   20993 network_create.go:281] running [docker network inspect offline-docker-028000] to gather additional debugging logs...
	I0307 11:27:30.889375   20993 cli_runner.go:164] Run: docker network inspect offline-docker-028000
	W0307 11:27:30.939041   20993 cli_runner.go:211] docker network inspect offline-docker-028000 returned with exit code 1
	I0307 11:27:30.939067   20993 network_create.go:284] error running [docker network inspect offline-docker-028000]: docker network inspect offline-docker-028000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-028000 not found
	I0307 11:27:30.939081   20993 network_create.go:286] output of [docker network inspect offline-docker-028000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-028000 not found
	
	** /stderr **
	I0307 11:27:30.939217   20993 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 11:27:31.052173   20993 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 11:27:31.054039   20993 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 11:27:31.054796   20993 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00220bab0}
	I0307 11:27:31.054836   20993 network_create.go:124] attempt to create docker network offline-docker-028000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0307 11:27:31.055001   20993 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-028000 offline-docker-028000
	I0307 11:27:31.294029   20993 network_create.go:108] docker network offline-docker-028000 192.168.67.0/24 created
	I0307 11:27:31.294074   20993 kic.go:121] calculated static IP "192.168.67.2" for the "offline-docker-028000" container
	I0307 11:27:31.294183   20993 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0307 11:27:31.347659   20993 cli_runner.go:164] Run: docker volume create offline-docker-028000 --label name.minikube.sigs.k8s.io=offline-docker-028000 --label created_by.minikube.sigs.k8s.io=true
	I0307 11:27:31.399429   20993 oci.go:103] Successfully created a docker volume offline-docker-028000
	I0307 11:27:31.399548   20993 cli_runner.go:164] Run: docker run --rm --name offline-docker-028000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-028000 --entrypoint /usr/bin/test -v offline-docker-028000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0307 11:27:32.007594   20993 oci.go:107] Successfully prepared a docker volume offline-docker-028000
	I0307 11:27:32.007630   20993 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 11:27:32.007650   20993 kic.go:194] Starting extracting preloaded images to volume ...
	I0307 11:27:32.007752   20993 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-028000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0307 11:33:30.823050   20993 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 11:33:30.823235   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:33:30.877940   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	I0307 11:33:30.878065   20993 retry.go:31] will retry after 212.211855ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:31.090598   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:33:31.143467   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	I0307 11:33:31.143564   20993 retry.go:31] will retry after 560.399034ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:31.706323   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:33:31.758897   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	I0307 11:33:31.758993   20993 retry.go:31] will retry after 750.869705ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:32.510322   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:33:32.562913   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	W0307 11:33:32.563016   20993 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	
	W0307 11:33:32.563038   20993 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:32.563091   20993 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0307 11:33:32.563144   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:33:32.613157   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	I0307 11:33:32.613270   20993 retry.go:31] will retry after 197.038694ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:32.811833   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:33:32.862071   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	I0307 11:33:32.862176   20993 retry.go:31] will retry after 401.982969ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:33.264483   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:33:33.315352   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	I0307 11:33:33.315450   20993 retry.go:31] will retry after 492.351133ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:33.810060   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:33:33.864757   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	I0307 11:33:33.864854   20993 retry.go:31] will retry after 561.612677ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:34.426863   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:33:34.476340   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	W0307 11:33:34.476442   20993 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	
	W0307 11:33:34.476459   20993 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:34.476478   20993 start.go:128] duration metric: took 6m3.696970785s to createHost
	I0307 11:33:34.476485   20993 start.go:83] releasing machines lock for "offline-docker-028000", held for 6m3.697230925s
	W0307 11:33:34.476501   20993 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0307 11:33:34.476936   20993 cli_runner.go:164] Run: docker container inspect offline-docker-028000 --format={{.State.Status}}
	W0307 11:33:34.526812   20993 cli_runner.go:211] docker container inspect offline-docker-028000 --format={{.State.Status}} returned with exit code 1
	I0307 11:33:34.526878   20993 delete.go:82] Unable to get host status for offline-docker-028000, assuming it has already been deleted: state: unknown state "offline-docker-028000": docker container inspect offline-docker-028000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	W0307 11:33:34.526981   20993 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0307 11:33:34.526991   20993 start.go:728] Will try again in 5 seconds ...
	I0307 11:33:39.529167   20993 start.go:360] acquireMachinesLock for offline-docker-028000: {Name:mkdb8d7ce527dac36c08b95f17bba4b95f74491f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 11:33:39.530230   20993 start.go:364] duration metric: took 839.682µs to acquireMachinesLock for "offline-docker-028000"
	I0307 11:33:39.530328   20993 start.go:96] Skipping create...Using existing machine configuration
	I0307 11:33:39.530345   20993 fix.go:54] fixHost starting: 
	I0307 11:33:39.530757   20993 cli_runner.go:164] Run: docker container inspect offline-docker-028000 --format={{.State.Status}}
	W0307 11:33:39.583839   20993 cli_runner.go:211] docker container inspect offline-docker-028000 --format={{.State.Status}} returned with exit code 1
	I0307 11:33:39.583888   20993 fix.go:112] recreateIfNeeded on offline-docker-028000: state= err=unknown state "offline-docker-028000": docker container inspect offline-docker-028000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:39.583903   20993 fix.go:117] machineExists: false. err=machine does not exist
	I0307 11:33:39.606896   20993 out.go:177] * docker "offline-docker-028000" container is missing, will recreate.
	I0307 11:33:39.649363   20993 delete.go:124] DEMOLISHING offline-docker-028000 ...
	I0307 11:33:39.649490   20993 cli_runner.go:164] Run: docker container inspect offline-docker-028000 --format={{.State.Status}}
	W0307 11:33:39.699498   20993 cli_runner.go:211] docker container inspect offline-docker-028000 --format={{.State.Status}} returned with exit code 1
	W0307 11:33:39.699559   20993 stop.go:83] unable to get state: unknown state "offline-docker-028000": docker container inspect offline-docker-028000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:39.699581   20993 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-028000": docker container inspect offline-docker-028000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:39.699957   20993 cli_runner.go:164] Run: docker container inspect offline-docker-028000 --format={{.State.Status}}
	W0307 11:33:39.749521   20993 cli_runner.go:211] docker container inspect offline-docker-028000 --format={{.State.Status}} returned with exit code 1
	I0307 11:33:39.749590   20993 delete.go:82] Unable to get host status for offline-docker-028000, assuming it has already been deleted: state: unknown state "offline-docker-028000": docker container inspect offline-docker-028000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:39.749681   20993 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-028000
	W0307 11:33:39.799144   20993 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-028000 returned with exit code 1
	I0307 11:33:39.799181   20993 kic.go:371] could not find the container offline-docker-028000 to remove it. will try anyways
	I0307 11:33:39.799256   20993 cli_runner.go:164] Run: docker container inspect offline-docker-028000 --format={{.State.Status}}
	W0307 11:33:39.848386   20993 cli_runner.go:211] docker container inspect offline-docker-028000 --format={{.State.Status}} returned with exit code 1
	W0307 11:33:39.848440   20993 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-028000": docker container inspect offline-docker-028000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:39.848521   20993 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-028000 /bin/bash -c "sudo init 0"
	W0307 11:33:39.898178   20993 cli_runner.go:211] docker exec --privileged -t offline-docker-028000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0307 11:33:39.898212   20993 oci.go:650] error shutdown offline-docker-028000: docker exec --privileged -t offline-docker-028000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:40.900736   20993 cli_runner.go:164] Run: docker container inspect offline-docker-028000 --format={{.State.Status}}
	W0307 11:33:40.951849   20993 cli_runner.go:211] docker container inspect offline-docker-028000 --format={{.State.Status}} returned with exit code 1
	I0307 11:33:40.951904   20993 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-028000": docker container inspect offline-docker-028000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:40.951916   20993 oci.go:664] temporary error: container offline-docker-028000 status is  but expect it to be exited
	I0307 11:33:40.951941   20993 retry.go:31] will retry after 316.397705ms: couldn't verify container is exited. %v: unknown state "offline-docker-028000": docker container inspect offline-docker-028000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:41.270672   20993 cli_runner.go:164] Run: docker container inspect offline-docker-028000 --format={{.State.Status}}
	W0307 11:33:41.321547   20993 cli_runner.go:211] docker container inspect offline-docker-028000 --format={{.State.Status}} returned with exit code 1
	I0307 11:33:41.321595   20993 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-028000": docker container inspect offline-docker-028000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:41.321611   20993 oci.go:664] temporary error: container offline-docker-028000 status is  but expect it to be exited
	I0307 11:33:41.321638   20993 retry.go:31] will retry after 900.38859ms: couldn't verify container is exited. %v: unknown state "offline-docker-028000": docker container inspect offline-docker-028000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:42.223203   20993 cli_runner.go:164] Run: docker container inspect offline-docker-028000 --format={{.State.Status}}
	W0307 11:33:42.276418   20993 cli_runner.go:211] docker container inspect offline-docker-028000 --format={{.State.Status}} returned with exit code 1
	I0307 11:33:42.276476   20993 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-028000": docker container inspect offline-docker-028000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:42.276485   20993 oci.go:664] temporary error: container offline-docker-028000 status is  but expect it to be exited
	I0307 11:33:42.276510   20993 retry.go:31] will retry after 1.675874935s: couldn't verify container is exited. %v: unknown state "offline-docker-028000": docker container inspect offline-docker-028000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:43.954795   20993 cli_runner.go:164] Run: docker container inspect offline-docker-028000 --format={{.State.Status}}
	W0307 11:33:44.007653   20993 cli_runner.go:211] docker container inspect offline-docker-028000 --format={{.State.Status}} returned with exit code 1
	I0307 11:33:44.007700   20993 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-028000": docker container inspect offline-docker-028000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:44.007710   20993 oci.go:664] temporary error: container offline-docker-028000 status is  but expect it to be exited
	I0307 11:33:44.007736   20993 retry.go:31] will retry after 2.124425943s: couldn't verify container is exited. %v: unknown state "offline-docker-028000": docker container inspect offline-docker-028000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:46.133148   20993 cli_runner.go:164] Run: docker container inspect offline-docker-028000 --format={{.State.Status}}
	W0307 11:33:46.183419   20993 cli_runner.go:211] docker container inspect offline-docker-028000 --format={{.State.Status}} returned with exit code 1
	I0307 11:33:46.183466   20993 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-028000": docker container inspect offline-docker-028000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:46.183475   20993 oci.go:664] temporary error: container offline-docker-028000 status is  but expect it to be exited
	I0307 11:33:46.183497   20993 retry.go:31] will retry after 2.256913908s: couldn't verify container is exited. %v: unknown state "offline-docker-028000": docker container inspect offline-docker-028000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:48.441993   20993 cli_runner.go:164] Run: docker container inspect offline-docker-028000 --format={{.State.Status}}
	W0307 11:33:48.492950   20993 cli_runner.go:211] docker container inspect offline-docker-028000 --format={{.State.Status}} returned with exit code 1
	I0307 11:33:48.493002   20993 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-028000": docker container inspect offline-docker-028000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:48.493013   20993 oci.go:664] temporary error: container offline-docker-028000 status is  but expect it to be exited
	I0307 11:33:48.493033   20993 retry.go:31] will retry after 3.332313422s: couldn't verify container is exited. %v: unknown state "offline-docker-028000": docker container inspect offline-docker-028000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:51.825847   20993 cli_runner.go:164] Run: docker container inspect offline-docker-028000 --format={{.State.Status}}
	W0307 11:33:51.877343   20993 cli_runner.go:211] docker container inspect offline-docker-028000 --format={{.State.Status}} returned with exit code 1
	I0307 11:33:51.877391   20993 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-028000": docker container inspect offline-docker-028000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:51.877403   20993 oci.go:664] temporary error: container offline-docker-028000 status is  but expect it to be exited
	I0307 11:33:51.877428   20993 retry.go:31] will retry after 4.5580692s: couldn't verify container is exited. %v: unknown state "offline-docker-028000": docker container inspect offline-docker-028000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:56.436834   20993 cli_runner.go:164] Run: docker container inspect offline-docker-028000 --format={{.State.Status}}
	W0307 11:33:56.489416   20993 cli_runner.go:211] docker container inspect offline-docker-028000 --format={{.State.Status}} returned with exit code 1
	I0307 11:33:56.489473   20993 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-028000": docker container inspect offline-docker-028000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:33:56.489485   20993 oci.go:664] temporary error: container offline-docker-028000 status is  but expect it to be exited
	I0307 11:33:56.489516   20993 oci.go:88] couldn't shut down offline-docker-028000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-028000": docker container inspect offline-docker-028000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	 
	I0307 11:33:56.489584   20993 cli_runner.go:164] Run: docker rm -f -v offline-docker-028000
	I0307 11:33:56.539872   20993 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-028000
	W0307 11:33:56.589489   20993 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-028000 returned with exit code 1
	I0307 11:33:56.589606   20993 cli_runner.go:164] Run: docker network inspect offline-docker-028000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 11:33:56.639431   20993 cli_runner.go:164] Run: docker network rm offline-docker-028000
	I0307 11:33:56.744310   20993 fix.go:124] Sleeping 1 second for extra luck!
	I0307 11:33:57.745561   20993 start.go:125] createHost starting for "" (driver="docker")
	I0307 11:33:57.767453   20993 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0307 11:33:57.767601   20993 start.go:159] libmachine.API.Create for "offline-docker-028000" (driver="docker")
	I0307 11:33:57.767621   20993 client.go:168] LocalClient.Create starting
	I0307 11:33:57.767774   20993 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18239-8734/.minikube/certs/ca.pem
	I0307 11:33:57.767849   20993 main.go:141] libmachine: Decoding PEM data...
	I0307 11:33:57.767867   20993 main.go:141] libmachine: Parsing certificate...
	I0307 11:33:57.768020   20993 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18239-8734/.minikube/certs/cert.pem
	I0307 11:33:57.768107   20993 main.go:141] libmachine: Decoding PEM data...
	I0307 11:33:57.768121   20993 main.go:141] libmachine: Parsing certificate...
	I0307 11:33:57.789674   20993 cli_runner.go:164] Run: docker network inspect offline-docker-028000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0307 11:33:57.839907   20993 cli_runner.go:211] docker network inspect offline-docker-028000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0307 11:33:57.839994   20993 network_create.go:281] running [docker network inspect offline-docker-028000] to gather additional debugging logs...
	I0307 11:33:57.840013   20993 cli_runner.go:164] Run: docker network inspect offline-docker-028000
	W0307 11:33:57.889964   20993 cli_runner.go:211] docker network inspect offline-docker-028000 returned with exit code 1
	I0307 11:33:57.889989   20993 network_create.go:284] error running [docker network inspect offline-docker-028000]: docker network inspect offline-docker-028000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-028000 not found
	I0307 11:33:57.890008   20993 network_create.go:286] output of [docker network inspect offline-docker-028000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-028000 not found
	
	** /stderr **
	I0307 11:33:57.890142   20993 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 11:33:57.942132   20993 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 11:33:57.943605   20993 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 11:33:57.944965   20993 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 11:33:57.945366   20993 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0004c38c0}
	I0307 11:33:57.945406   20993 network_create.go:124] attempt to create docker network offline-docker-028000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0307 11:33:57.945502   20993 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-028000 offline-docker-028000
	W0307 11:33:57.995127   20993 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-028000 offline-docker-028000 returned with exit code 1
	W0307 11:33:57.995166   20993 network_create.go:149] failed to create docker network offline-docker-028000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-028000 offline-docker-028000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0307 11:33:57.995181   20993 network_create.go:116] failed to create docker network offline-docker-028000 192.168.76.0/24, will retry: subnet is taken
	I0307 11:33:57.996553   20993 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 11:33:57.996935   20993 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000138c0}
	I0307 11:33:57.996947   20993 network_create.go:124] attempt to create docker network offline-docker-028000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0307 11:33:57.997013   20993 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-028000 offline-docker-028000
	W0307 11:33:58.047175   20993 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-028000 offline-docker-028000 returned with exit code 1
	W0307 11:33:58.047212   20993 network_create.go:149] failed to create docker network offline-docker-028000 192.168.85.0/24 with gateway 192.168.85.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-028000 offline-docker-028000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0307 11:33:58.047228   20993 network_create.go:116] failed to create docker network offline-docker-028000 192.168.85.0/24, will retry: subnet is taken
	I0307 11:33:58.048840   20993 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 11:33:58.049303   20993 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021a6350}
	I0307 11:33:58.049318   20993 network_create.go:124] attempt to create docker network offline-docker-028000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0307 11:33:58.049390   20993 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-028000 offline-docker-028000
	I0307 11:33:58.134874   20993 network_create.go:108] docker network offline-docker-028000 192.168.94.0/24 created
	I0307 11:33:58.134910   20993 kic.go:121] calculated static IP "192.168.94.2" for the "offline-docker-028000" container
	I0307 11:33:58.135018   20993 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0307 11:33:58.187097   20993 cli_runner.go:164] Run: docker volume create offline-docker-028000 --label name.minikube.sigs.k8s.io=offline-docker-028000 --label created_by.minikube.sigs.k8s.io=true
	I0307 11:33:58.236457   20993 oci.go:103] Successfully created a docker volume offline-docker-028000
	I0307 11:33:58.236578   20993 cli_runner.go:164] Run: docker run --rm --name offline-docker-028000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-028000 --entrypoint /usr/bin/test -v offline-docker-028000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0307 11:33:58.544195   20993 oci.go:107] Successfully prepared a docker volume offline-docker-028000
	I0307 11:33:58.544233   20993 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 11:33:58.544245   20993 kic.go:194] Starting extracting preloaded images to volume ...
	I0307 11:33:58.544334   20993 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-028000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0307 11:39:57.775324   20993 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 11:39:57.775455   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:39:57.826649   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	I0307 11:39:57.826776   20993 retry.go:31] will retry after 294.909299ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:39:58.121943   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:39:58.175023   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	I0307 11:39:58.175138   20993 retry.go:31] will retry after 349.417927ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:39:58.526415   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:39:58.579283   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	I0307 11:39:58.579386   20993 retry.go:31] will retry after 291.016263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:39:58.872794   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:39:58.924766   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	W0307 11:39:58.924879   20993 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	
	W0307 11:39:58.924898   20993 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:39:58.924956   20993 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0307 11:39:58.925011   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:39:58.974475   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	I0307 11:39:58.974572   20993 retry.go:31] will retry after 207.045318ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:39:59.184053   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:39:59.235128   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	I0307 11:39:59.235234   20993 retry.go:31] will retry after 428.851288ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:39:59.664726   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:39:59.716186   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	I0307 11:39:59.716287   20993 retry.go:31] will retry after 785.510739ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:40:00.504286   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:40:00.618702   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	W0307 11:40:00.618850   20993 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	
	W0307 11:40:00.618869   20993 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:40:00.618879   20993 start.go:128] duration metric: took 6m2.866100528s to createHost
	I0307 11:40:00.618966   20993 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 11:40:00.619032   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:40:00.669220   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	I0307 11:40:00.669312   20993 retry.go:31] will retry after 284.371947ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:40:00.954060   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:40:01.004239   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	I0307 11:40:01.004342   20993 retry.go:31] will retry after 394.660469ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:40:01.401418   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:40:01.457026   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	I0307 11:40:01.457123   20993 retry.go:31] will retry after 517.593877ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:40:01.976720   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:40:02.027184   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	W0307 11:40:02.027290   20993 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	
	W0307 11:40:02.027307   20993 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:40:02.027363   20993 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0307 11:40:02.027423   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:40:02.077359   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	I0307 11:40:02.077453   20993 retry.go:31] will retry after 128.452363ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:40:02.208272   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:40:02.259210   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	I0307 11:40:02.259306   20993 retry.go:31] will retry after 516.784349ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:40:02.776607   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:40:02.828047   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	I0307 11:40:02.828139   20993 retry.go:31] will retry after 640.380874ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:40:03.469291   20993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000
	W0307 11:40:03.520696   20993 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000 returned with exit code 1
	W0307 11:40:03.520802   20993 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	
	W0307 11:40:03.520825   20993 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-028000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-028000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000
	I0307 11:40:03.520841   20993 fix.go:56] duration metric: took 6m23.982884238s for fixHost
	I0307 11:40:03.520847   20993 start.go:83] releasing machines lock for "offline-docker-028000", held for 6m23.982935674s
	W0307 11:40:03.520926   20993 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-028000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-028000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0307 11:40:03.563406   20993 out.go:177] 
	W0307 11:40:03.585418   20993 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0307 11:40:03.585453   20993 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0307 11:40:03.585470   20993 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0307 11:40:03.608372   20993 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-028000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
panic.go:626: *** TestOffline FAILED at 2024-03-07 11:40:03.703521 -0800 PST m=+6277.238274261
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-028000
helpers_test.go:235: (dbg) docker inspect offline-docker-028000:

-- stdout --
	[
	    {
	        "Name": "offline-docker-028000",
	        "Id": "a5b24387b120b02bdf233b747519bf18183661230349fb203a44a4b301058855",
	        "Created": "2024-03-07T19:33:58.095156295Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-028000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-028000 -n offline-docker-028000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-028000 -n offline-docker-028000: exit status 7 (114.910399ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0307 11:40:03.869655   21864 status.go:249] status error: host: state: unknown state "offline-docker-028000": docker container inspect offline-docker-028000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-028000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-028000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-028000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-028000
--- FAIL: TestOffline (754.67s)

TestMountStart/serial/VerifyMountPostStop (871.42s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-461000 ssh -- ls /minikube-host
E0307 10:25:32.913065    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 10:26:35.571983    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 10:27:58.624092    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 10:30:32.938628    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 10:31:35.598555    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 10:35:32.939927    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 10:36:35.600478    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-2-461000 ssh -- ls /minikube-host: signal: killed (14m30.982024685s)
mount_start_test.go:116: mount failed: "out/minikube-darwin-amd64 -p mount-start-2-461000 ssh -- ls /minikube-host" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountPostStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-461000
helpers_test.go:235: (dbg) docker inspect mount-start-2-461000:

-- stdout --
	[
	    {
	        "Id": "379cda3ade1e50e0255738d85cf4f02748d3774f51ab793a84075675aa7215d2",
	        "Created": "2024-03-07T18:23:02.085884553Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 166301,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-07T18:23:13.602041962Z",
	            "FinishedAt": "2024-03-07T18:23:11.05385242Z"
	        },
	        "Image": "sha256:a5b872dc86053f77fb58d93168e89c4b0fa5961a7ed628d630f6cd6decd7bca0",
	        "ResolvConfPath": "/var/lib/docker/containers/379cda3ade1e50e0255738d85cf4f02748d3774f51ab793a84075675aa7215d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/379cda3ade1e50e0255738d85cf4f02748d3774f51ab793a84075675aa7215d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/379cda3ade1e50e0255738d85cf4f02748d3774f51ab793a84075675aa7215d2/hosts",
	        "LogPath": "/var/lib/docker/containers/379cda3ade1e50e0255738d85cf4f02748d3774f51ab793a84075675aa7215d2/379cda3ade1e50e0255738d85cf4f02748d3774f51ab793a84075675aa7215d2-json.log",
	        "Name": "/mount-start-2-461000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "mount-start-2-461000:/var",
	                "/host_mnt/Users:/minikube-host"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "mount-start-2-461000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/161f65777f70c9cdbad1ac5396b19776dfd4c751c0a81267af3a91f3c1e8c36f-init/diff:/var/lib/docker/overlay2/331cbbf7a4cd2b209b08e1fa6892006fb4fc56f4a09d2b810d9b96428f5193a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/161f65777f70c9cdbad1ac5396b19776dfd4c751c0a81267af3a91f3c1e8c36f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/161f65777f70c9cdbad1ac5396b19776dfd4c751c0a81267af3a91f3c1e8c36f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/161f65777f70c9cdbad1ac5396b19776dfd4c751c0a81267af3a91f3c1e8c36f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "mount-start-2-461000",
	                "Source": "/var/lib/docker/volumes/mount-start-2-461000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/host_mnt/Users",
	                "Destination": "/minikube-host",
	                "Mode": "",
	                "RW": true,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "mount-start-2-461000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "mount-start-2-461000",
	                "name.minikube.sigs.k8s.io": "mount-start-2-461000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "edd5c8e8f7cdca1acb7954b3593df1cd3ba5efcf2bece3932fa110bf32284643",
	            "SandboxKey": "/var/run/docker/netns/edd5c8e8f7cd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54558"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54559"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54560"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54561"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54562"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "mount-start-2-461000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "379cda3ade1e",
	                        "mount-start-2-461000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "0c943b65dcf1f42c8d77ad8560ab0d7ccf15262865e08a04068d18d5647498cd",
	                    "EndpointID": "f59b963bc50e54cbf40033887942ba24d49f9297a9e1adc2f9b09255e666fa95",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "mount-start-2-461000",
	                        "379cda3ade1e"
	                    ]
	                }
	            }
	        }
	    }
]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-461000 -n mount-start-2-461000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-461000 -n mount-start-2-461000: exit status 6 (387.144424ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0307 10:37:52.475206   17568 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-2-461000" does not appear in /Users/jenkins/minikube-integration/18239-8734/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-461000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountPostStop (871.42s)

TestMultiNode/serial/FreshStart2Nodes (755.94s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-813000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0307 10:40:32.939775    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 10:41:35.600053    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 10:44:38.687905    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 10:45:32.986386    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 10:46:35.646928    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 10:50:32.990277    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 10:51:35.649504    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-813000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m35.760478146s)

-- stdout --
	* [multinode-813000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18239
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "multinode-813000" primary control-plane node in "multinode-813000" cluster
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-813000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0307 10:39:01.521049   17702 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:39:01.521222   17702 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:39:01.521227   17702 out.go:304] Setting ErrFile to fd 2...
	I0307 10:39:01.521231   17702 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:39:01.521408   17702 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 10:39:01.522840   17702 out.go:298] Setting JSON to false
	I0307 10:39:01.545483   17702 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5912,"bootTime":1709830829,"procs":428,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0307 10:39:01.545580   17702 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:39:01.567233   17702 out.go:177] * [multinode-813000] minikube v1.32.0 on Darwin 14.3.1
	I0307 10:39:01.610105   17702 out.go:177]   - MINIKUBE_LOCATION=18239
	I0307 10:39:01.610151   17702 notify.go:220] Checking for updates...
	I0307 10:39:01.652917   17702 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig
	I0307 10:39:01.673994   17702 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0307 10:39:01.695128   17702 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:39:01.717158   17702 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube
	I0307 10:39:01.739862   17702 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:39:01.761242   17702 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:39:01.815638   17702 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0307 10:39:01.815805   17702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 10:39:01.914148   17702 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:false NGoroutines:115 SystemTime:2024-03-07 18:39:01.90391556 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213279744 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=
cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker De
v Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) f
or an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0307 10:39:01.957033   17702 out.go:177] * Using the docker driver based on user configuration
	I0307 10:39:01.978357   17702 start.go:297] selected driver: docker
	I0307 10:39:01.978386   17702 start.go:901] validating driver "docker" against <nil>
	I0307 10:39:01.978401   17702 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:39:01.982262   17702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 10:39:02.079331   17702 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:false NGoroutines:115 SystemTime:2024-03-07 18:39:02.069751969 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213279744 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0307 10:39:02.079537   17702 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 10:39:02.079722   17702 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:39:02.100530   17702 out.go:177] * Using Docker Desktop driver with root privileges
	I0307 10:39:02.121736   17702 cni.go:84] Creating CNI manager for ""
	I0307 10:39:02.121766   17702 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0307 10:39:02.121778   17702 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 10:39:02.121877   17702 start.go:340] cluster config:
	{Name:multinode-813000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: S
SHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:39:02.143312   17702 out.go:177] * Starting "multinode-813000" primary control-plane node in "multinode-813000" cluster
	I0307 10:39:02.185608   17702 cache.go:121] Beginning downloading kic base image for docker with docker
	I0307 10:39:02.206403   17702 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0307 10:39:02.248460   17702 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:39:02.248521   17702 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 10:39:02.248537   17702 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0307 10:39:02.248572   17702 cache.go:56] Caching tarball of preloaded images
	I0307 10:39:02.248768   17702 preload.go:173] Found /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 10:39:02.248788   17702 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 10:39:02.250440   17702 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/multinode-813000/config.json ...
	I0307 10:39:02.250552   17702 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/multinode-813000/config.json: {Name:mkb4fa1ca8a4999c41061247ad7aea3bcb1a1e34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:39:02.299395   17702 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0307 10:39:02.299415   17702 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0307 10:39:02.299435   17702 cache.go:194] Successfully downloaded all kic artifacts
	I0307 10:39:02.299485   17702 start.go:360] acquireMachinesLock for multinode-813000: {Name:mk29a5ca7eade859f62bd0aa5a200d60c803f23a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:39:02.299648   17702 start.go:364] duration metric: took 150.921µs to acquireMachinesLock for "multinode-813000"
	I0307 10:39:02.299674   17702 start.go:93] Provisioning new machine with config: &{Name:multinode-813000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-813000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:39:02.299732   17702 start.go:125] createHost starting for "" (driver="docker")
	I0307 10:39:02.341448   17702 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0307 10:39:02.341755   17702 start.go:159] libmachine.API.Create for "multinode-813000" (driver="docker")
	I0307 10:39:02.341791   17702 client.go:168] LocalClient.Create starting
	I0307 10:39:02.341919   17702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18239-8734/.minikube/certs/ca.pem
	I0307 10:39:02.341984   17702 main.go:141] libmachine: Decoding PEM data...
	I0307 10:39:02.342004   17702 main.go:141] libmachine: Parsing certificate...
	I0307 10:39:02.342102   17702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18239-8734/.minikube/certs/cert.pem
	I0307 10:39:02.342212   17702 main.go:141] libmachine: Decoding PEM data...
	I0307 10:39:02.342222   17702 main.go:141] libmachine: Parsing certificate...
	I0307 10:39:02.342956   17702 cli_runner.go:164] Run: docker network inspect multinode-813000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0307 10:39:02.392659   17702 cli_runner.go:211] docker network inspect multinode-813000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0307 10:39:02.392761   17702 network_create.go:281] running [docker network inspect multinode-813000] to gather additional debugging logs...
	I0307 10:39:02.392778   17702 cli_runner.go:164] Run: docker network inspect multinode-813000
	W0307 10:39:02.442644   17702 cli_runner.go:211] docker network inspect multinode-813000 returned with exit code 1
	I0307 10:39:02.442673   17702 network_create.go:284] error running [docker network inspect multinode-813000]: docker network inspect multinode-813000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-813000 not found
	I0307 10:39:02.442685   17702 network_create.go:286] output of [docker network inspect multinode-813000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-813000 not found
	
	** /stderr **
	I0307 10:39:02.442822   17702 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 10:39:02.493558   17702 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 10:39:02.495129   17702 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 10:39:02.495501   17702 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000895610}
	I0307 10:39:02.495521   17702 network_create.go:124] attempt to create docker network multinode-813000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0307 10:39:02.495598   17702 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-813000 multinode-813000
	I0307 10:39:02.581202   17702 network_create.go:108] docker network multinode-813000 192.168.67.0/24 created
	I0307 10:39:02.581241   17702 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-813000" container
	I0307 10:39:02.581342   17702 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0307 10:39:02.631139   17702 cli_runner.go:164] Run: docker volume create multinode-813000 --label name.minikube.sigs.k8s.io=multinode-813000 --label created_by.minikube.sigs.k8s.io=true
	I0307 10:39:02.680822   17702 oci.go:103] Successfully created a docker volume multinode-813000
	I0307 10:39:02.680938   17702 cli_runner.go:164] Run: docker run --rm --name multinode-813000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-813000 --entrypoint /usr/bin/test -v multinode-813000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0307 10:39:03.034038   17702 oci.go:107] Successfully prepared a docker volume multinode-813000
	I0307 10:39:03.034077   17702 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:39:03.034090   17702 kic.go:194] Starting extracting preloaded images to volume ...
	I0307 10:39:03.034191   17702 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-813000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0307 10:45:02.389303   17702 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 10:45:02.389410   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:45:02.439482   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 10:45:02.439612   17702 retry.go:31] will retry after 331.330841ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:02.771316   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:45:02.823814   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 10:45:02.823913   17702 retry.go:31] will retry after 328.602199ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:03.152902   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:45:03.202281   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 10:45:03.202384   17702 retry.go:31] will retry after 728.892644ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:03.931764   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:45:03.981846   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	W0307 10:45:03.981965   17702 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	
	W0307 10:45:03.981986   17702 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:03.982043   17702 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0307 10:45:03.982106   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:45:04.031137   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 10:45:04.031232   17702 retry.go:31] will retry after 175.849731ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:04.207278   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:45:04.256277   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 10:45:04.256365   17702 retry.go:31] will retry after 219.756193ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:04.476781   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:45:04.526041   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 10:45:04.526151   17702 retry.go:31] will retry after 449.17276ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:04.977754   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:45:05.028409   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 10:45:05.028508   17702 retry.go:31] will retry after 535.619853ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:05.566426   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:45:05.616161   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	W0307 10:45:05.616261   17702 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	
	W0307 10:45:05.616275   17702 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:05.616294   17702 start.go:128] duration metric: took 6m3.270419315s to createHost
	I0307 10:45:05.616301   17702 start.go:83] releasing machines lock for "multinode-813000", held for 6m3.270515387s
	W0307 10:45:05.616316   17702 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0307 10:45:05.616740   17702 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:45:05.665734   17702 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:45:05.665784   17702 delete.go:82] Unable to get host status for multinode-813000, assuming it has already been deleted: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	W0307 10:45:05.665862   17702 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0307 10:45:05.665873   17702 start.go:728] Will try again in 5 seconds ...
	I0307 10:45:10.666720   17702 start.go:360] acquireMachinesLock for multinode-813000: {Name:mk29a5ca7eade859f62bd0aa5a200d60c803f23a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:45:10.666915   17702 start.go:364] duration metric: took 162.777µs to acquireMachinesLock for "multinode-813000"
	I0307 10:45:10.666960   17702 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:45:10.666971   17702 fix.go:54] fixHost starting: 
	I0307 10:45:10.667382   17702 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:45:10.716371   17702 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:45:10.716420   17702 fix.go:112] recreateIfNeeded on multinode-813000: state= err=unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:10.716442   17702 fix.go:117] machineExists: false. err=machine does not exist
	I0307 10:45:10.738079   17702 out.go:177] * docker "multinode-813000" container is missing, will recreate.
	I0307 10:45:10.781925   17702 delete.go:124] DEMOLISHING multinode-813000 ...
	I0307 10:45:10.782106   17702 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:45:10.832307   17702 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	W0307 10:45:10.832357   17702 stop.go:83] unable to get state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:10.832376   17702 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:10.832738   17702 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:45:10.882321   17702 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:45:10.882382   17702 delete.go:82] Unable to get host status for multinode-813000, assuming it has already been deleted: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:10.882479   17702 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-813000
	W0307 10:45:10.931408   17702 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-813000 returned with exit code 1
	I0307 10:45:10.931449   17702 kic.go:371] could not find the container multinode-813000 to remove it. will try anyways
	I0307 10:45:10.931527   17702 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:45:10.979932   17702 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	W0307 10:45:10.979974   17702 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:10.980051   17702 cli_runner.go:164] Run: docker exec --privileged -t multinode-813000 /bin/bash -c "sudo init 0"
	W0307 10:45:11.028871   17702 cli_runner.go:211] docker exec --privileged -t multinode-813000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0307 10:45:11.028909   17702 oci.go:650] error shutdown multinode-813000: docker exec --privileged -t multinode-813000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:12.030139   17702 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:45:12.080203   17702 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:45:12.080248   17702 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:12.080258   17702 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 10:45:12.080281   17702 retry.go:31] will retry after 541.891787ms: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:12.622492   17702 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:45:12.672243   17702 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:45:12.672308   17702 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:12.672321   17702 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 10:45:12.672341   17702 retry.go:31] will retry after 988.234741ms: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:13.661466   17702 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:45:13.711996   17702 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:45:13.712043   17702 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:13.712059   17702 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 10:45:13.712085   17702 retry.go:31] will retry after 1.508250689s: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:15.220833   17702 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:45:15.271308   17702 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:45:15.271353   17702 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:15.271365   17702 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 10:45:15.271391   17702 retry.go:31] will retry after 2.173761674s: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:17.445725   17702 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:45:17.496265   17702 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:45:17.496324   17702 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:17.496337   17702 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 10:45:17.496358   17702 retry.go:31] will retry after 1.490906418s: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:18.988102   17702 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:45:19.040596   17702 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:45:19.040639   17702 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:19.040650   17702 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 10:45:19.040676   17702 retry.go:31] will retry after 4.39177789s: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:23.434970   17702 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:45:23.485918   17702 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:45:23.485963   17702 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:23.485974   17702 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 10:45:23.485994   17702 retry.go:31] will retry after 6.13301299s: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:29.619858   17702 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:45:29.669752   17702 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:45:29.669794   17702 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:45:29.669805   17702 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 10:45:29.669836   17702 oci.go:88] couldn't shut down multinode-813000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	 
	I0307 10:45:29.669910   17702 cli_runner.go:164] Run: docker rm -f -v multinode-813000
	I0307 10:45:29.719244   17702 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-813000
	W0307 10:45:29.768353   17702 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-813000 returned with exit code 1
	I0307 10:45:29.768481   17702 cli_runner.go:164] Run: docker network inspect multinode-813000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 10:45:29.818215   17702 cli_runner.go:164] Run: docker network rm multinode-813000
	I0307 10:45:29.932530   17702 fix.go:124] Sleeping 1 second for extra luck!
	I0307 10:45:30.933825   17702 start.go:125] createHost starting for "" (driver="docker")
	I0307 10:45:30.955724   17702 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0307 10:45:30.955863   17702 start.go:159] libmachine.API.Create for "multinode-813000" (driver="docker")
	I0307 10:45:30.955881   17702 client.go:168] LocalClient.Create starting
	I0307 10:45:30.956048   17702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18239-8734/.minikube/certs/ca.pem
	I0307 10:45:30.956121   17702 main.go:141] libmachine: Decoding PEM data...
	I0307 10:45:30.956139   17702 main.go:141] libmachine: Parsing certificate...
	I0307 10:45:30.956222   17702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18239-8734/.minikube/certs/cert.pem
	I0307 10:45:30.956269   17702 main.go:141] libmachine: Decoding PEM data...
	I0307 10:45:30.956278   17702 main.go:141] libmachine: Parsing certificate...
	I0307 10:45:30.977122   17702 cli_runner.go:164] Run: docker network inspect multinode-813000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0307 10:45:31.031490   17702 cli_runner.go:211] docker network inspect multinode-813000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0307 10:45:31.031573   17702 network_create.go:281] running [docker network inspect multinode-813000] to gather additional debugging logs...
	I0307 10:45:31.031586   17702 cli_runner.go:164] Run: docker network inspect multinode-813000
	W0307 10:45:31.081190   17702 cli_runner.go:211] docker network inspect multinode-813000 returned with exit code 1
	I0307 10:45:31.081220   17702 network_create.go:284] error running [docker network inspect multinode-813000]: docker network inspect multinode-813000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-813000 not found
	I0307 10:45:31.081231   17702 network_create.go:286] output of [docker network inspect multinode-813000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-813000 not found
	
	** /stderr **
	I0307 10:45:31.081368   17702 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 10:45:31.134147   17702 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 10:45:31.135612   17702 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 10:45:31.136969   17702 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 10:45:31.137282   17702 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021e0000}
	I0307 10:45:31.137296   17702 network_create.go:124] attempt to create docker network multinode-813000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0307 10:45:31.137367   17702 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-813000 multinode-813000
	W0307 10:45:31.187174   17702 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-813000 multinode-813000 returned with exit code 1
	W0307 10:45:31.187212   17702 network_create.go:149] failed to create docker network multinode-813000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-813000 multinode-813000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0307 10:45:31.187232   17702 network_create.go:116] failed to create docker network multinode-813000 192.168.76.0/24, will retry: subnet is taken
	I0307 10:45:31.188863   17702 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 10:45:31.189347   17702 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021fe800}
	I0307 10:45:31.189361   17702 network_create.go:124] attempt to create docker network multinode-813000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0307 10:45:31.189426   17702 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-813000 multinode-813000
	I0307 10:45:31.273743   17702 network_create.go:108] docker network multinode-813000 192.168.85.0/24 created
	I0307 10:45:31.273778   17702 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-813000" container
	I0307 10:45:31.273886   17702 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0307 10:45:31.324244   17702 cli_runner.go:164] Run: docker volume create multinode-813000 --label name.minikube.sigs.k8s.io=multinode-813000 --label created_by.minikube.sigs.k8s.io=true
	I0307 10:45:31.373781   17702 oci.go:103] Successfully created a docker volume multinode-813000
	I0307 10:45:31.373889   17702 cli_runner.go:164] Run: docker run --rm --name multinode-813000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-813000 --entrypoint /usr/bin/test -v multinode-813000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0307 10:45:31.687679   17702 oci.go:107] Successfully prepared a docker volume multinode-813000
	I0307 10:45:31.687722   17702 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:45:31.687735   17702 kic.go:194] Starting extracting preloaded images to volume ...
	I0307 10:45:31.687814   17702 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-813000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0307 10:51:30.959744   17702 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 10:51:30.959876   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:51:31.010853   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 10:51:31.010967   17702 retry.go:31] will retry after 191.612945ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:51:31.204366   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:51:31.253683   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 10:51:31.253795   17702 retry.go:31] will retry after 406.101701ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:51:31.661928   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:51:31.716855   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 10:51:31.716968   17702 retry.go:31] will retry after 446.429ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:51:32.165521   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:51:32.215892   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	W0307 10:51:32.216002   17702 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	
	W0307 10:51:32.216024   17702 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:51:32.216079   17702 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0307 10:51:32.216138   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:51:32.264736   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 10:51:32.264831   17702 retry.go:31] will retry after 204.186152ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:51:32.471424   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:51:32.523935   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 10:51:32.524047   17702 retry.go:31] will retry after 376.252251ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:51:32.902701   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:51:32.953224   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 10:51:32.953323   17702 retry.go:31] will retry after 802.969932ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:51:33.756763   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:51:33.809656   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	W0307 10:51:33.809764   17702 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	
	W0307 10:51:33.809782   17702 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:51:33.809798   17702 start.go:128] duration metric: took 6m2.872706774s to createHost
	I0307 10:51:33.809864   17702 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 10:51:33.809930   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:51:33.859846   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 10:51:33.859938   17702 retry.go:31] will retry after 345.349197ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:51:34.207606   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:51:34.256636   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 10:51:34.256730   17702 retry.go:31] will retry after 496.281255ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:51:34.753345   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:51:34.807131   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 10:51:34.807222   17702 retry.go:31] will retry after 661.510545ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:51:35.469893   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:51:35.520666   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	W0307 10:51:35.520770   17702 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	
	W0307 10:51:35.520784   17702 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:51:35.520842   17702 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0307 10:51:35.520902   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:51:35.569550   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 10:51:35.569644   17702 retry.go:31] will retry after 353.537188ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:51:35.923782   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:51:35.974302   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 10:51:35.974407   17702 retry.go:31] will retry after 532.328775ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:51:36.507473   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:51:36.556914   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 10:51:36.557012   17702 retry.go:31] will retry after 512.427697ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:51:37.070004   17702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 10:51:37.120400   17702 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	W0307 10:51:37.120508   17702 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	
	W0307 10:51:37.120523   17702 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:51:37.120534   17702 fix.go:56] duration metric: took 6m26.449912564s for fixHost
	I0307 10:51:37.120540   17702 start.go:83] releasing machines lock for "multinode-813000", held for 6m26.449949865s
	W0307 10:51:37.120613   17702 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-813000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-813000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0307 10:51:37.163300   17702 out.go:177] 
	W0307 10:51:37.185357   17702 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0307 10:51:37.185426   17702 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0307 10:51:37.185450   17702 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0307 10:51:37.209067   17702 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-813000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-813000
helpers_test.go:235: (dbg) docker inspect multinode-813000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-813000",
	        "Id": "6cd7a5ed795c3140f2e49493a756998adb8b9e63743dabe9df7066f986a74d34",
	        "Created": "2024-03-07T18:45:31.23453857Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-813000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000: exit status 7 (114.444947ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0307 10:51:37.447592   18359 status.go:249] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-813000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (755.94s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (76.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-813000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-813000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (100.894862ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-813000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-813000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-813000 -- rollout status deployment/busybox: exit status 1 (100.309962ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-813000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-813000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-813000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.397085ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-813000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-813000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-813000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.757234ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-813000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-813000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-813000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.582146ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-813000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-813000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-813000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.383278ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-813000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-813000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-813000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.36185ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-813000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-813000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-813000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.735788ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-813000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-813000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-813000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.855425ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-813000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-813000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-813000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.587493ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-813000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-813000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-813000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.311754ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-813000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-813000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-813000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.029476ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-813000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-813000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-813000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (98.946267ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-813000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-813000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-813000 -- exec  -- nslookup kubernetes.io: exit status 1 (100.7059ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-813000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-813000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-813000 -- exec  -- nslookup kubernetes.default: exit status 1 (100.978447ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-813000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-813000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-813000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (99.985151ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-813000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-813000
helpers_test.go:235: (dbg) docker inspect multinode-813000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-813000",
	        "Id": "6cd7a5ed795c3140f2e49493a756998adb8b9e63743dabe9df7066f986a74d34",
	        "Created": "2024-03-07T18:45:31.23453857Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-813000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000: exit status 7 (113.565829ms)

                                                
                                                
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0307 10:52:54.169954   18461 status.go:249] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-813000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (76.72s)
TestMultiNode/serial/PingHostFrom2Pods (0.27s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-813000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-813000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (101.527735ms)
** stderr ** 
	error: no server found for cluster "multinode-813000"
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-813000
helpers_test.go:235: (dbg) docker inspect multinode-813000:
-- stdout --
	[
	    {
	        "Name": "multinode-813000",
	        "Id": "6cd7a5ed795c3140f2e49493a756998adb8b9e63743dabe9df7066f986a74d34",
	        "Created": "2024-03-07T18:45:31.23453857Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-813000"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000: exit status 7 (113.545961ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0307 10:52:54.438991   18470 status.go:249] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-813000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.27s)
TestMultiNode/serial/AddNode (0.37s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-813000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-813000 -v 3 --alsologtostderr: exit status 80 (199.935776ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I0307 10:52:54.501322   18474 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:52:54.501601   18474 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:52:54.501606   18474 out.go:304] Setting ErrFile to fd 2...
	I0307 10:52:54.501610   18474 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:52:54.501784   18474 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 10:52:54.502109   18474 mustload.go:65] Loading cluster: multinode-813000
	I0307 10:52:54.502373   18474 config.go:182] Loaded profile config "multinode-813000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:52:54.502755   18474 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:52:54.552504   18474 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:52:54.574746   18474 out.go:177] 
	W0307 10:52:54.596328   18474 out.go:239] X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-813000 host status: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-813000 host status: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	
	I0307 10:52:54.617236   18474 out.go:177] 
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-813000 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-813000
helpers_test.go:235: (dbg) docker inspect multinode-813000:
-- stdout --
	[
	    {
	        "Name": "multinode-813000",
	        "Id": "6cd7a5ed795c3140f2e49493a756998adb8b9e63743dabe9df7066f986a74d34",
	        "Created": "2024-03-07T18:45:31.23453857Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-813000"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000: exit status 7 (113.646005ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0307 10:52:54.805563   18480 status.go:249] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-813000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.37s)
TestMultiNode/serial/MultiNodeLabels (0.21s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-813000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-813000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (37.06918ms)
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-813000
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-813000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-813000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-813000
helpers_test.go:235: (dbg) docker inspect multinode-813000:
-- stdout --
	[
	    {
	        "Name": "multinode-813000",
	        "Id": "6cd7a5ed795c3140f2e49493a756998adb8b9e63743dabe9df7066f986a74d34",
	        "Created": "2024-03-07T18:45:31.23453857Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-813000"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000: exit status 7 (115.134907ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0307 10:52:55.011185   18487 status.go:249] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-813000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.21s)
TestMultiNode/serial/ProfileList (0.35s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:166: expected profile "multinode-813000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[{\"Name\":\"mount-start-2-461000\",\"Status\":\"\",\"Config\":null,\"Active\":false}],\"valid\":[{\"Name\":\"multinode-813000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-813000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"A
PIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-813000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.
4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":600000000
00},\"Active\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-813000
helpers_test.go:235: (dbg) docker inspect multinode-813000:
-- stdout --
	[
	    {
	        "Name": "multinode-813000",
	        "Id": "6cd7a5ed795c3140f2e49493a756998adb8b9e63743dabe9df7066f986a74d34",
	        "Created": "2024-03-07T18:45:31.23453857Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-813000"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000: exit status 7 (114.138849ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0307 10:52:55.365801   18501 status.go:249] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-813000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.35s)
TestMultiNode/serial/CopyFile (0.28s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-813000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-813000 status --output json --alsologtostderr: exit status 7 (113.527584ms)
-- stdout --
	{"Name":"multinode-813000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}
-- /stdout --
** stderr ** 
	I0307 10:52:55.428864   18505 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:52:55.429130   18505 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:52:55.429135   18505 out.go:304] Setting ErrFile to fd 2...
	I0307 10:52:55.429139   18505 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:52:55.429305   18505 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 10:52:55.429479   18505 out.go:298] Setting JSON to true
	I0307 10:52:55.429503   18505 mustload.go:65] Loading cluster: multinode-813000
	I0307 10:52:55.429538   18505 notify.go:220] Checking for updates...
	I0307 10:52:55.429767   18505 config.go:182] Loaded profile config "multinode-813000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:52:55.429785   18505 status.go:255] checking status of multinode-813000 ...
	I0307 10:52:55.430201   18505 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:52:55.479407   18505 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:52:55.479485   18505 status.go:330] multinode-813000 host status = "" (err=state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	)
	I0307 10:52:55.479506   18505 status.go:257] multinode-813000 status: &{Name:multinode-813000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 10:52:55.479548   18505 status.go:260] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	E0307 10:52:55.479556   18505 status.go:263] The "multinode-813000" host does not exist!
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-813000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-813000
helpers_test.go:235: (dbg) docker inspect multinode-813000:
-- stdout --
	[
	    {
	        "Name": "multinode-813000",
	        "Id": "6cd7a5ed795c3140f2e49493a756998adb8b9e63743dabe9df7066f986a74d34",
	        "Created": "2024-03-07T18:45:31.23453857Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-813000"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000: exit status 7 (114.056265ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0307 10:52:55.646309   18513 status.go:249] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-813000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.28s)
TestMultiNode/serial/StopNode (0.55s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-813000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-813000 node stop m03: exit status 85 (155.550391ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-813000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-813000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-813000 status: exit status 7 (115.243716ms)
-- stdout --
	multinode-813000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	
-- /stdout --
** stderr ** 
	E0307 10:52:55.917695   18519 status.go:260] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	E0307 10:52:55.917708   18519 status.go:263] The "multinode-813000" host does not exist!
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-813000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-813000 status --alsologtostderr: exit status 7 (114.044249ms)
-- stdout --
	multinode-813000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	
-- /stdout --
** stderr ** 
	I0307 10:52:55.980855   18523 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:52:55.981039   18523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:52:55.981044   18523 out.go:304] Setting ErrFile to fd 2...
	I0307 10:52:55.981048   18523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:52:55.981237   18523 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 10:52:55.981414   18523 out.go:298] Setting JSON to false
	I0307 10:52:55.981434   18523 mustload.go:65] Loading cluster: multinode-813000
	I0307 10:52:55.981474   18523 notify.go:220] Checking for updates...
	I0307 10:52:55.981697   18523 config.go:182] Loaded profile config "multinode-813000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:52:55.981714   18523 status.go:255] checking status of multinode-813000 ...
	I0307 10:52:55.982094   18523 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:52:56.031715   18523 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:52:56.031782   18523 status.go:330] multinode-813000 host status = "" (err=state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	)
	I0307 10:52:56.031810   18523 status.go:257] multinode-813000 status: &{Name:multinode-813000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 10:52:56.031832   18523 status.go:260] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	E0307 10:52:56.031838   18523 status.go:263] The "multinode-813000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-813000 status --alsologtostderr": multinode-813000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:271: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-813000 status --alsologtostderr": multinode-813000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:275: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-813000 status --alsologtostderr": multinode-813000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-813000
helpers_test.go:235: (dbg) docker inspect multinode-813000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-813000",
	        "Id": "6cd7a5ed795c3140f2e49493a756998adb8b9e63743dabe9df7066f986a74d34",
	        "Created": "2024-03-07T18:45:31.23453857Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-813000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000: exit status 7 (114.024593ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0307 10:52:56.198508   18529 status.go:249] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-813000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.55s)
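
The assertions above (multinode_test.go:267/271/275) compare fields of the plain-text block that `minikube status` prints (node name, then `key: value` lines). As a reference for reading these failures, here is a minimal sketch of parsing that block into a dict; `parse_status` is a hypothetical helper for illustration, not part of minikube or the test suite:

```python
def parse_status(text: str) -> dict:
    """Parse a `minikube status` text block into a dict.

    Assumes the format shown in the logs above: the first
    non-empty line is the node name, and each following line
    is a "key: value" pair (type, host, kubelet, apiserver,
    kubeconfig).
    """
    lines = [ln.strip() for ln in text.strip().splitlines() if ln.strip()]
    status = {"name": lines[0]}
    for ln in lines[1:]:
        key, _, value = ln.partition(": ")
        status[key] = value
    return status


# Sample taken verbatim from the failing test output above.
sample = """\
multinode-813000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent
"""

st = parse_status(sample)
```

A `host: Nonexistent` value here corresponds to the underlying `docker container inspect` call exiting non-zero with "No such container", as seen in the stderr capture.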

                                                
                                    
TestMultiNode/serial/StartAfterStop (49.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-813000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-813000 node start m03 -v=7 --alsologtostderr: exit status 85 (154.712741ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 10:52:56.262093   18533 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:52:56.262371   18533 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:52:56.262377   18533 out.go:304] Setting ErrFile to fd 2...
	I0307 10:52:56.262380   18533 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:52:56.262560   18533 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 10:52:56.262887   18533 mustload.go:65] Loading cluster: multinode-813000
	I0307 10:52:56.263154   18533 config.go:182] Loaded profile config "multinode-813000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:52:56.284262   18533 out.go:177] 
	W0307 10:52:56.306299   18533 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0307 10:52:56.306329   18533 out.go:239] * 
	* 
	W0307 10:52:56.310922   18533 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:52:56.332097   18533 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0307 10:52:56.262093   18533 out.go:291] Setting OutFile to fd 1 ...
I0307 10:52:56.262371   18533 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 10:52:56.262377   18533 out.go:304] Setting ErrFile to fd 2...
I0307 10:52:56.262380   18533 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 10:52:56.262560   18533 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
I0307 10:52:56.262887   18533 mustload.go:65] Loading cluster: multinode-813000
I0307 10:52:56.263154   18533 config.go:182] Loaded profile config "multinode-813000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 10:52:56.284262   18533 out.go:177] 
W0307 10:52:56.306299   18533 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0307 10:52:56.306329   18533 out.go:239] * 
* 
W0307 10:52:56.310922   18533 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0307 10:52:56.332097   18533 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-813000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-813000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-813000 status -v=7 --alsologtostderr: exit status 7 (113.691653ms)

                                                
                                                
-- stdout --
	multinode-813000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 10:52:56.416076   18535 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:52:56.416249   18535 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:52:56.416254   18535 out.go:304] Setting ErrFile to fd 2...
	I0307 10:52:56.416258   18535 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:52:56.416434   18535 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 10:52:56.416609   18535 out.go:298] Setting JSON to false
	I0307 10:52:56.416634   18535 mustload.go:65] Loading cluster: multinode-813000
	I0307 10:52:56.416671   18535 notify.go:220] Checking for updates...
	I0307 10:52:56.417917   18535 config.go:182] Loaded profile config "multinode-813000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:52:56.417938   18535 status.go:255] checking status of multinode-813000 ...
	I0307 10:52:56.418302   18535 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:52:56.467222   18535 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:52:56.467302   18535 status.go:330] multinode-813000 host status = "" (err=state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	)
	I0307 10:52:56.467323   18535 status.go:257] multinode-813000 status: &{Name:multinode-813000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 10:52:56.467347   18535 status.go:260] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	E0307 10:52:56.467354   18535 status.go:263] The "multinode-813000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-813000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-813000 status -v=7 --alsologtostderr: exit status 7 (114.970911ms)

                                                
                                                
-- stdout --
	multinode-813000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 10:52:57.191978   18541 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:52:57.192258   18541 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:52:57.192263   18541 out.go:304] Setting ErrFile to fd 2...
	I0307 10:52:57.192267   18541 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:52:57.192448   18541 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 10:52:57.192619   18541 out.go:298] Setting JSON to false
	I0307 10:52:57.192641   18541 mustload.go:65] Loading cluster: multinode-813000
	I0307 10:52:57.192685   18541 notify.go:220] Checking for updates...
	I0307 10:52:57.193935   18541 config.go:182] Loaded profile config "multinode-813000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:52:57.193960   18541 status.go:255] checking status of multinode-813000 ...
	I0307 10:52:57.194394   18541 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:52:57.243444   18541 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:52:57.243521   18541 status.go:330] multinode-813000 host status = "" (err=state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	)
	I0307 10:52:57.243542   18541 status.go:257] multinode-813000 status: &{Name:multinode-813000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 10:52:57.243564   18541 status.go:260] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	E0307 10:52:57.243573   18541 status.go:263] The "multinode-813000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-813000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-813000 status -v=7 --alsologtostderr: exit status 7 (116.368238ms)

                                                
                                                
-- stdout --
	multinode-813000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 10:52:58.382207   18545 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:52:58.382388   18545 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:52:58.382394   18545 out.go:304] Setting ErrFile to fd 2...
	I0307 10:52:58.382397   18545 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:52:58.382584   18545 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 10:52:58.382769   18545 out.go:298] Setting JSON to false
	I0307 10:52:58.382793   18545 mustload.go:65] Loading cluster: multinode-813000
	I0307 10:52:58.382830   18545 notify.go:220] Checking for updates...
	I0307 10:52:58.383059   18545 config.go:182] Loaded profile config "multinode-813000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:52:58.383078   18545 status.go:255] checking status of multinode-813000 ...
	I0307 10:52:58.383518   18545 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:52:58.433772   18545 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:52:58.433852   18545 status.go:330] multinode-813000 host status = "" (err=state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	)
	I0307 10:52:58.433873   18545 status.go:257] multinode-813000 status: &{Name:multinode-813000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 10:52:58.433897   18545 status.go:260] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	E0307 10:52:58.433918   18545 status.go:263] The "multinode-813000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-813000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-813000 status -v=7 --alsologtostderr: exit status 7 (114.512849ms)

                                                
                                                
-- stdout --
	multinode-813000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 10:53:01.795366   18552 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:53:01.795615   18552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:53:01.795620   18552 out.go:304] Setting ErrFile to fd 2...
	I0307 10:53:01.795624   18552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:53:01.795794   18552 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 10:53:01.795970   18552 out.go:298] Setting JSON to false
	I0307 10:53:01.795992   18552 mustload.go:65] Loading cluster: multinode-813000
	I0307 10:53:01.796031   18552 notify.go:220] Checking for updates...
	I0307 10:53:01.796285   18552 config.go:182] Loaded profile config "multinode-813000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:53:01.796303   18552 status.go:255] checking status of multinode-813000 ...
	I0307 10:53:01.796736   18552 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:53:01.846367   18552 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:53:01.846433   18552 status.go:330] multinode-813000 host status = "" (err=state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	)
	I0307 10:53:01.846458   18552 status.go:257] multinode-813000 status: &{Name:multinode-813000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 10:53:01.846482   18552 status.go:260] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	E0307 10:53:01.846489   18552 status.go:263] The "multinode-813000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-813000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-813000 status -v=7 --alsologtostderr: exit status 7 (118.915687ms)

                                                
                                                
-- stdout --
	multinode-813000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 10:53:05.336185   18556 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:53:05.336352   18556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:53:05.336358   18556 out.go:304] Setting ErrFile to fd 2...
	I0307 10:53:05.336361   18556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:53:05.336545   18556 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 10:53:05.336719   18556 out.go:298] Setting JSON to false
	I0307 10:53:05.336740   18556 mustload.go:65] Loading cluster: multinode-813000
	I0307 10:53:05.336777   18556 notify.go:220] Checking for updates...
	I0307 10:53:05.337011   18556 config.go:182] Loaded profile config "multinode-813000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:53:05.337028   18556 status.go:255] checking status of multinode-813000 ...
	I0307 10:53:05.337402   18556 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:53:05.387410   18556 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:53:05.387501   18556 status.go:330] multinode-813000 host status = "" (err=state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	)
	I0307 10:53:05.387522   18556 status.go:257] multinode-813000 status: &{Name:multinode-813000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 10:53:05.387547   18556 status.go:260] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	E0307 10:53:05.387555   18556 status.go:263] The "multinode-813000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-813000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-813000 status -v=7 --alsologtostderr: exit status 7 (114.416489ms)

                                                
                                                
-- stdout --
	multinode-813000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 10:53:12.326100   18564 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:53:12.326379   18564 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:53:12.326385   18564 out.go:304] Setting ErrFile to fd 2...
	I0307 10:53:12.326388   18564 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:53:12.327037   18564 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 10:53:12.327362   18564 out.go:298] Setting JSON to false
	I0307 10:53:12.327389   18564 mustload.go:65] Loading cluster: multinode-813000
	I0307 10:53:12.327474   18564 notify.go:220] Checking for updates...
	I0307 10:53:12.327939   18564 config.go:182] Loaded profile config "multinode-813000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:53:12.327960   18564 status.go:255] checking status of multinode-813000 ...
	I0307 10:53:12.328325   18564 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:53:12.377881   18564 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:53:12.377947   18564 status.go:330] multinode-813000 host status = "" (err=state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	)
	I0307 10:53:12.377971   18564 status.go:257] multinode-813000 status: &{Name:multinode-813000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 10:53:12.377992   18564 status.go:260] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	E0307 10:53:12.378003   18564 status.go:263] The "multinode-813000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-813000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-813000 status -v=7 --alsologtostderr: exit status 7 (116.554023ms)

                                                
                                                
-- stdout --
	multinode-813000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 10:53:21.387509   18574 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:53:21.387777   18574 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:53:21.387783   18574 out.go:304] Setting ErrFile to fd 2...
	I0307 10:53:21.387787   18574 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:53:21.387958   18574 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 10:53:21.388135   18574 out.go:298] Setting JSON to false
	I0307 10:53:21.388159   18574 mustload.go:65] Loading cluster: multinode-813000
	I0307 10:53:21.388193   18574 notify.go:220] Checking for updates...
	I0307 10:53:21.388423   18574 config.go:182] Loaded profile config "multinode-813000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:53:21.388441   18574 status.go:255] checking status of multinode-813000 ...
	I0307 10:53:21.388819   18574 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:53:21.439154   18574 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:53:21.439212   18574 status.go:330] multinode-813000 host status = "" (err=state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	)
	I0307 10:53:21.439237   18574 status.go:257] multinode-813000 status: &{Name:multinode-813000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 10:53:21.439261   18574 status.go:260] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	E0307 10:53:21.439270   18574 status.go:263] The "multinode-813000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-813000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-813000 status -v=7 --alsologtostderr: exit status 7 (116.698909ms)

                                                
                                                
-- stdout --
	multinode-813000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 10:53:33.484441   18593 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:53:33.485068   18593 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:53:33.485080   18593 out.go:304] Setting ErrFile to fd 2...
	I0307 10:53:33.485086   18593 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:53:33.485674   18593 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 10:53:33.485881   18593 out.go:298] Setting JSON to false
	I0307 10:53:33.485906   18593 mustload.go:65] Loading cluster: multinode-813000
	I0307 10:53:33.485949   18593 notify.go:220] Checking for updates...
	I0307 10:53:33.486175   18593 config.go:182] Loaded profile config "multinode-813000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:53:33.486193   18593 status.go:255] checking status of multinode-813000 ...
	I0307 10:53:33.486587   18593 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:53:33.536703   18593 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:53:33.536774   18593 status.go:330] multinode-813000 host status = "" (err=state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	)
	I0307 10:53:33.536811   18593 status.go:257] multinode-813000 status: &{Name:multinode-813000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 10:53:33.536835   18593 status.go:260] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	E0307 10:53:33.536843   18593 status.go:263] The "multinode-813000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-813000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-813000 status -v=7 --alsologtostderr: exit status 7 (116.189729ms)

                                                
                                                
-- stdout --
	multinode-813000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 10:53:45.169732   18608 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:53:45.169920   18608 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:53:45.169926   18608 out.go:304] Setting ErrFile to fd 2...
	I0307 10:53:45.169930   18608 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:53:45.170114   18608 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 10:53:45.170288   18608 out.go:298] Setting JSON to false
	I0307 10:53:45.170314   18608 mustload.go:65] Loading cluster: multinode-813000
	I0307 10:53:45.170350   18608 notify.go:220] Checking for updates...
	I0307 10:53:45.170579   18608 config.go:182] Loaded profile config "multinode-813000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:53:45.170597   18608 status.go:255] checking status of multinode-813000 ...
	I0307 10:53:45.172020   18608 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:53:45.222507   18608 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:53:45.222578   18608 status.go:330] multinode-813000 host status = "" (err=state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	)
	I0307 10:53:45.222599   18608 status.go:257] multinode-813000 status: &{Name:multinode-813000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 10:53:45.222623   18608 status.go:260] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	E0307 10:53:45.222631   18608 status.go:263] The "multinode-813000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-813000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-813000
helpers_test.go:235: (dbg) docker inspect multinode-813000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-813000",
	        "Id": "6cd7a5ed795c3140f2e49493a756998adb8b9e63743dabe9df7066f986a74d34",
	        "Created": "2024-03-07T18:45:31.23453857Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-813000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000: exit status 7 (113.402228ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0307 10:53:45.389023   18614 status.go:249] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-813000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (49.19s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (792.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-813000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-813000
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-813000: exit status 82 (13.042223826s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-813000"  ...
	* Stopping node "multinode-813000"  ...
	* Stopping node "multinode-813000"  ...
	* Stopping node "multinode-813000"  ...
	* Stopping node "multinode-813000"  ...
	* Stopping node "multinode-813000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-813000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-813000" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-813000 --wait=true -v=8 --alsologtostderr
E0307 10:55:16.041868    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 10:55:32.991871    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 10:56:35.653232    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 11:00:33.019731    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 11:01:18.724826    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 11:01:35.680285    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 11:05:33.022371    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 11:06:35.684626    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-813000 --wait=true -v=8 --alsologtostderr: exit status 52 (12m59.069109217s)

                                                
                                                
-- stdout --
	* [multinode-813000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18239
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-813000" primary control-plane node in "multinode-813000" cluster
	* Pulling base image v0.0.42-1708944392-18244 ...
	* docker "multinode-813000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-813000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 10:53:58.559697   18638 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:53:58.559883   18638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:53:58.559888   18638 out.go:304] Setting ErrFile to fd 2...
	I0307 10:53:58.559892   18638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:53:58.560078   18638 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 10:53:58.561608   18638 out.go:298] Setting JSON to false
	I0307 10:53:58.583772   18638 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":6809,"bootTime":1709830829,"procs":438,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0307 10:53:58.583876   18638 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:53:58.605983   18638 out.go:177] * [multinode-813000] minikube v1.32.0 on Darwin 14.3.1
	I0307 10:53:58.647912   18638 out.go:177]   - MINIKUBE_LOCATION=18239
	I0307 10:53:58.647954   18638 notify.go:220] Checking for updates...
	I0307 10:53:58.691779   18638 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig
	I0307 10:53:58.712848   18638 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0307 10:53:58.735004   18638 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:53:58.777538   18638 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube
	I0307 10:53:58.798786   18638 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:53:58.820636   18638 config.go:182] Loaded profile config "multinode-813000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:53:58.820812   18638 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:53:58.876661   18638 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0307 10:53:58.876835   18638 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 10:53:58.976990   18638 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:135 SystemTime:2024-03-07 18:53:58.966459808 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213279744 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0307 10:53:58.998668   18638 out.go:177] * Using the docker driver based on existing profile
	I0307 10:53:59.020632   18638 start.go:297] selected driver: docker
	I0307 10:53:59.020663   18638 start.go:901] validating driver "docker" against &{Name:multinode-813000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:53:59.020809   18638 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:53:59.021017   18638 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 10:53:59.120342   18638 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:135 SystemTime:2024-03-07 18:53:59.110436201 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213279744 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0307 10:53:59.123331   18638 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:53:59.123396   18638 cni.go:84] Creating CNI manager for ""
	I0307 10:53:59.123404   18638 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0307 10:53:59.123475   18638 start.go:340] cluster config:
	{Name:multinode-813000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:53:59.166572   18638 out.go:177] * Starting "multinode-813000" primary control-plane node in "multinode-813000" cluster
	I0307 10:53:59.187688   18638 cache.go:121] Beginning downloading kic base image for docker with docker
	I0307 10:53:59.209680   18638 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0307 10:53:59.251812   18638 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:53:59.251861   18638 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 10:53:59.251890   18638 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0307 10:53:59.251907   18638 cache.go:56] Caching tarball of preloaded images
	I0307 10:53:59.252114   18638 preload.go:173] Found /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 10:53:59.252133   18638 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 10:53:59.252945   18638 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/multinode-813000/config.json ...
	I0307 10:53:59.302053   18638 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0307 10:53:59.302080   18638 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0307 10:53:59.302104   18638 cache.go:194] Successfully downloaded all kic artifacts
	I0307 10:53:59.302144   18638 start.go:360] acquireMachinesLock for multinode-813000: {Name:mk29a5ca7eade859f62bd0aa5a200d60c803f23a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:53:59.302235   18638 start.go:364] duration metric: took 70.633µs to acquireMachinesLock for "multinode-813000"
	I0307 10:53:59.302257   18638 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:53:59.302265   18638 fix.go:54] fixHost starting: 
	I0307 10:53:59.302555   18638 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:53:59.350898   18638 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:53:59.350962   18638 fix.go:112] recreateIfNeeded on multinode-813000: state= err=unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:53:59.351001   18638 fix.go:117] machineExists: false. err=machine does not exist
	I0307 10:53:59.372458   18638 out.go:177] * docker "multinode-813000" container is missing, will recreate.
	I0307 10:53:59.414615   18638 delete.go:124] DEMOLISHING multinode-813000 ...
	I0307 10:53:59.414798   18638 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:53:59.465442   18638 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	W0307 10:53:59.465500   18638 stop.go:83] unable to get state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:53:59.465517   18638 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:53:59.465873   18638 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:53:59.515078   18638 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:53:59.515140   18638 delete.go:82] Unable to get host status for multinode-813000, assuming it has already been deleted: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:53:59.515236   18638 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-813000
	W0307 10:53:59.564630   18638 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-813000 returned with exit code 1
	I0307 10:53:59.564662   18638 kic.go:371] could not find the container multinode-813000 to remove it. will try anyways
	I0307 10:53:59.564730   18638 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:53:59.613374   18638 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	W0307 10:53:59.613421   18638 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:53:59.613496   18638 cli_runner.go:164] Run: docker exec --privileged -t multinode-813000 /bin/bash -c "sudo init 0"
	W0307 10:53:59.662775   18638 cli_runner.go:211] docker exec --privileged -t multinode-813000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0307 10:53:59.662804   18638 oci.go:650] error shutdown multinode-813000: docker exec --privileged -t multinode-813000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:54:00.664482   18638 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:54:00.716019   18638 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:54:00.716068   18638 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:54:00.716078   18638 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 10:54:00.716118   18638 retry.go:31] will retry after 469.532853ms: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:54:01.185909   18638 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:54:01.239156   18638 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:54:01.239200   18638 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:54:01.239216   18638 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 10:54:01.239243   18638 retry.go:31] will retry after 867.938664ms: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:54:02.107506   18638 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:54:02.158453   18638 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:54:02.158498   18638 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:54:02.158508   18638 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 10:54:02.158533   18638 retry.go:31] will retry after 1.32872451s: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:54:03.488070   18638 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:54:03.538830   18638 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:54:03.538875   18638 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:54:03.538885   18638 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 10:54:03.538909   18638 retry.go:31] will retry after 2.372858134s: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:54:05.914174   18638 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:54:05.964808   18638 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:54:05.964854   18638 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:54:05.964865   18638 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 10:54:05.964903   18638 retry.go:31] will retry after 1.672885326s: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:54:07.638854   18638 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:54:07.688708   18638 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:54:07.688751   18638 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:54:07.688765   18638 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 10:54:07.688791   18638 retry.go:31] will retry after 2.240491167s: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:54:09.931658   18638 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:54:09.982020   18638 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:54:09.982066   18638 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:54:09.982078   18638 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 10:54:09.982101   18638 retry.go:31] will retry after 7.636234028s: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:54:17.620371   18638 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 10:54:17.671272   18638 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 10:54:17.671314   18638 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 10:54:17.671323   18638 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 10:54:17.671352   18638 oci.go:88] couldn't shut down multinode-813000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	 
	I0307 10:54:17.671428   18638 cli_runner.go:164] Run: docker rm -f -v multinode-813000
	I0307 10:54:17.720745   18638 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-813000
	W0307 10:54:17.770000   18638 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-813000 returned with exit code 1
	I0307 10:54:17.770114   18638 cli_runner.go:164] Run: docker network inspect multinode-813000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 10:54:17.820271   18638 cli_runner.go:164] Run: docker network rm multinode-813000
	I0307 10:54:17.923233   18638 fix.go:124] Sleeping 1 second for extra luck!
	I0307 10:54:18.923847   18638 start.go:125] createHost starting for "" (driver="docker")
	I0307 10:54:18.945987   18638 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0307 10:54:18.946238   18638 start.go:159] libmachine.API.Create for "multinode-813000" (driver="docker")
	I0307 10:54:18.946286   18638 client.go:168] LocalClient.Create starting
	I0307 10:54:18.946500   18638 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18239-8734/.minikube/certs/ca.pem
	I0307 10:54:18.946573   18638 main.go:141] libmachine: Decoding PEM data...
	I0307 10:54:18.946601   18638 main.go:141] libmachine: Parsing certificate...
	I0307 10:54:18.946683   18638 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18239-8734/.minikube/certs/cert.pem
	I0307 10:54:18.946738   18638 main.go:141] libmachine: Decoding PEM data...
	I0307 10:54:18.946750   18638 main.go:141] libmachine: Parsing certificate...
	I0307 10:54:18.968580   18638 cli_runner.go:164] Run: docker network inspect multinode-813000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0307 10:54:19.019475   18638 cli_runner.go:211] docker network inspect multinode-813000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0307 10:54:19.019569   18638 network_create.go:281] running [docker network inspect multinode-813000] to gather additional debugging logs...
	I0307 10:54:19.019586   18638 cli_runner.go:164] Run: docker network inspect multinode-813000
	W0307 10:54:19.068739   18638 cli_runner.go:211] docker network inspect multinode-813000 returned with exit code 1
	I0307 10:54:19.068770   18638 network_create.go:284] error running [docker network inspect multinode-813000]: docker network inspect multinode-813000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-813000 not found
	I0307 10:54:19.068780   18638 network_create.go:286] output of [docker network inspect multinode-813000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-813000 not found
	
	** /stderr **
	I0307 10:54:19.068912   18638 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 10:54:19.120402   18638 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 10:54:19.121991   18638 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 10:54:19.122356   18638 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023e0fe0}
	I0307 10:54:19.122375   18638 network_create.go:124] attempt to create docker network multinode-813000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0307 10:54:19.122442   18638 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-813000 multinode-813000
	I0307 10:54:19.207784   18638 network_create.go:108] docker network multinode-813000 192.168.67.0/24 created
	I0307 10:54:19.207826   18638 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-813000" container
	I0307 10:54:19.207929   18638 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0307 10:54:19.257836   18638 cli_runner.go:164] Run: docker volume create multinode-813000 --label name.minikube.sigs.k8s.io=multinode-813000 --label created_by.minikube.sigs.k8s.io=true
	I0307 10:54:19.307270   18638 oci.go:103] Successfully created a docker volume multinode-813000
	I0307 10:54:19.307381   18638 cli_runner.go:164] Run: docker run --rm --name multinode-813000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-813000 --entrypoint /usr/bin/test -v multinode-813000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0307 10:54:19.597222   18638 oci.go:107] Successfully prepared a docker volume multinode-813000
	I0307 10:54:19.597259   18638 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:54:19.597272   18638 kic.go:194] Starting extracting preloaded images to volume ...
	I0307 10:54:19.597369   18638 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-813000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0307 11:00:18.974985   18638 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 11:00:18.975060   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:00:19.027886   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:00:19.028008   18638 retry.go:31] will retry after 210.930525ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:19.239390   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:00:19.291316   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:00:19.291432   18638 retry.go:31] will retry after 389.563949ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:19.682463   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:00:19.733327   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:00:19.733426   18638 retry.go:31] will retry after 580.76641ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:20.315642   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:00:20.366544   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:00:20.366658   18638 retry.go:31] will retry after 449.818819ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:20.817189   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:00:20.870316   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	W0307 11:00:20.870420   18638 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	
	W0307 11:00:20.870441   18638 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:20.870514   18638 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0307 11:00:20.870566   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:00:20.920179   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:00:20.920280   18638 retry.go:31] will retry after 358.619103ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:21.281393   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:00:21.332293   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:00:21.332392   18638 retry.go:31] will retry after 369.018262ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:21.703558   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:00:21.752794   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:00:21.752897   18638 retry.go:31] will retry after 629.535355ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:22.382749   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:00:22.436276   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	W0307 11:00:22.436383   18638 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	
	W0307 11:00:22.436398   18638 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:22.436415   18638 start.go:128] duration metric: took 6m3.483949738s to createHost
	I0307 11:00:22.436481   18638 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 11:00:22.436545   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:00:22.487248   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:00:22.487340   18638 retry.go:31] will retry after 270.170122ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:22.759677   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:00:22.809303   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:00:22.809393   18638 retry.go:31] will retry after 191.688722ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:23.001426   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:00:23.052734   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:00:23.052835   18638 retry.go:31] will retry after 409.51728ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:23.464713   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:00:23.515973   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:00:23.516077   18638 retry.go:31] will retry after 917.368497ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:24.433669   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:00:24.483429   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	W0307 11:00:24.483528   18638 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	
	W0307 11:00:24.483548   18638 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:24.483604   18638 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0307 11:00:24.483660   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:00:24.533503   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:00:24.533598   18638 retry.go:31] will retry after 217.031375ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:24.751652   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:00:24.803527   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:00:24.803620   18638 retry.go:31] will retry after 530.16751ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:25.335541   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:00:25.388401   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:00:25.388502   18638 retry.go:31] will retry after 721.567379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:26.112449   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:00:26.163672   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	W0307 11:00:26.163768   18638 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	
	W0307 11:00:26.163783   18638 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:26.163794   18638 fix.go:56] duration metric: took 6m26.832748729s for fixHost
	I0307 11:00:26.163801   18638 start.go:83] releasing machines lock for "multinode-813000", held for 6m26.832777995s
	W0307 11:00:26.163817   18638 start.go:713] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W0307 11:00:26.163885   18638 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I0307 11:00:26.163892   18638 start.go:728] Will try again in 5 seconds ...
	I0307 11:00:31.164950   18638 start.go:360] acquireMachinesLock for multinode-813000: {Name:mk29a5ca7eade859f62bd0aa5a200d60c803f23a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 11:00:31.165144   18638 start.go:364] duration metric: took 152.377µs to acquireMachinesLock for "multinode-813000"
	I0307 11:00:31.165187   18638 start.go:96] Skipping create...Using existing machine configuration
	I0307 11:00:31.165198   18638 fix.go:54] fixHost starting: 
	I0307 11:00:31.165687   18638 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:00:31.216237   18638 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 11:00:31.216281   18638 fix.go:112] recreateIfNeeded on multinode-813000: state= err=unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:31.216300   18638 fix.go:117] machineExists: false. err=machine does not exist
	I0307 11:00:31.237961   18638 out.go:177] * docker "multinode-813000" container is missing, will recreate.
	I0307 11:00:31.279668   18638 delete.go:124] DEMOLISHING multinode-813000 ...
	I0307 11:00:31.279816   18638 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:00:31.353293   18638 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	W0307 11:00:31.353342   18638 stop.go:83] unable to get state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:31.353358   18638 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:31.353709   18638 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:00:31.402447   18638 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 11:00:31.402495   18638 delete.go:82] Unable to get host status for multinode-813000, assuming it has already been deleted: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:31.402572   18638 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-813000
	W0307 11:00:31.452450   18638 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-813000 returned with exit code 1
	I0307 11:00:31.452483   18638 kic.go:371] could not find the container multinode-813000 to remove it. will try anyways
	I0307 11:00:31.452550   18638 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:00:31.502321   18638 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	W0307 11:00:31.502367   18638 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:31.502442   18638 cli_runner.go:164] Run: docker exec --privileged -t multinode-813000 /bin/bash -c "sudo init 0"
	W0307 11:00:31.552031   18638 cli_runner.go:211] docker exec --privileged -t multinode-813000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0307 11:00:31.552061   18638 oci.go:650] error shutdown multinode-813000: docker exec --privileged -t multinode-813000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:32.553014   18638 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:00:32.604959   18638 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 11:00:32.605005   18638 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:32.605015   18638 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 11:00:32.605051   18638 retry.go:31] will retry after 370.724406ms: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:32.976150   18638 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:00:33.026066   18638 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 11:00:33.026117   18638 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:33.026128   18638 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 11:00:33.026151   18638 retry.go:31] will retry after 413.653233ms: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:33.442151   18638 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:00:33.494258   18638 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 11:00:33.494301   18638 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:33.494314   18638 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 11:00:33.494337   18638 retry.go:31] will retry after 1.333179622s: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:34.828617   18638 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:00:34.878310   18638 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 11:00:34.878361   18638 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:34.878370   18638 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 11:00:34.878391   18638 retry.go:31] will retry after 1.063717635s: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:35.944449   18638 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:00:35.997116   18638 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 11:00:35.997163   18638 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:35.997173   18638 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 11:00:35.997199   18638 retry.go:31] will retry after 1.608645211s: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:37.608099   18638 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:00:37.657943   18638 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 11:00:37.657988   18638 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:37.657996   18638 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 11:00:37.658026   18638 retry.go:31] will retry after 3.818273172s: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:41.477891   18638 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:00:41.528955   18638 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 11:00:41.528999   18638 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:41.529007   18638 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 11:00:41.529033   18638 retry.go:31] will retry after 8.280737987s: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:49.810529   18638 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:00:49.861460   18638 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 11:00:49.861506   18638 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:00:49.861515   18638 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 11:00:49.861547   18638 oci.go:88] couldn't shut down multinode-813000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	 
	I0307 11:00:49.861614   18638 cli_runner.go:164] Run: docker rm -f -v multinode-813000
	I0307 11:00:49.911314   18638 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-813000
	W0307 11:00:49.960179   18638 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-813000 returned with exit code 1
	I0307 11:00:49.960283   18638 cli_runner.go:164] Run: docker network inspect multinode-813000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 11:00:50.009905   18638 cli_runner.go:164] Run: docker network rm multinode-813000
	I0307 11:00:50.119158   18638 fix.go:124] Sleeping 1 second for extra luck!
	I0307 11:00:51.119267   18638 start.go:125] createHost starting for "" (driver="docker")
	I0307 11:00:51.142184   18638 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0307 11:00:51.142376   18638 start.go:159] libmachine.API.Create for "multinode-813000" (driver="docker")
	I0307 11:00:51.142405   18638 client.go:168] LocalClient.Create starting
	I0307 11:00:51.142622   18638 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18239-8734/.minikube/certs/ca.pem
	I0307 11:00:51.142715   18638 main.go:141] libmachine: Decoding PEM data...
	I0307 11:00:51.142742   18638 main.go:141] libmachine: Parsing certificate...
	I0307 11:00:51.142823   18638 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18239-8734/.minikube/certs/cert.pem
	I0307 11:00:51.142900   18638 main.go:141] libmachine: Decoding PEM data...
	I0307 11:00:51.142916   18638 main.go:141] libmachine: Parsing certificate...
	I0307 11:00:51.164417   18638 cli_runner.go:164] Run: docker network inspect multinode-813000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0307 11:00:51.216437   18638 cli_runner.go:211] docker network inspect multinode-813000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0307 11:00:51.216525   18638 network_create.go:281] running [docker network inspect multinode-813000] to gather additional debugging logs...
	I0307 11:00:51.216544   18638 cli_runner.go:164] Run: docker network inspect multinode-813000
	W0307 11:00:51.265991   18638 cli_runner.go:211] docker network inspect multinode-813000 returned with exit code 1
	I0307 11:00:51.266023   18638 network_create.go:284] error running [docker network inspect multinode-813000]: docker network inspect multinode-813000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-813000 not found
	I0307 11:00:51.266034   18638 network_create.go:286] output of [docker network inspect multinode-813000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-813000 not found
	
	** /stderr **
	I0307 11:00:51.266198   18638 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 11:00:51.317034   18638 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 11:00:51.318428   18638 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 11:00:51.319743   18638 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 11:00:51.320109   18638 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000129c0}
	I0307 11:00:51.320124   18638 network_create.go:124] attempt to create docker network multinode-813000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0307 11:00:51.320201   18638 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-813000 multinode-813000
	W0307 11:00:51.369941   18638 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-813000 multinode-813000 returned with exit code 1
	W0307 11:00:51.369978   18638 network_create.go:149] failed to create docker network multinode-813000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-813000 multinode-813000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0307 11:00:51.370001   18638 network_create.go:116] failed to create docker network multinode-813000 192.168.76.0/24, will retry: subnet is taken
	I0307 11:00:51.371557   18638 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 11:00:51.371952   18638 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002580d60}
	I0307 11:00:51.371964   18638 network_create.go:124] attempt to create docker network multinode-813000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0307 11:00:51.372035   18638 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-813000 multinode-813000
	I0307 11:00:51.457209   18638 network_create.go:108] docker network multinode-813000 192.168.85.0/24 created
	I0307 11:00:51.457243   18638 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-813000" container
	I0307 11:00:51.457363   18638 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0307 11:00:51.507380   18638 cli_runner.go:164] Run: docker volume create multinode-813000 --label name.minikube.sigs.k8s.io=multinode-813000 --label created_by.minikube.sigs.k8s.io=true
	I0307 11:00:51.556690   18638 oci.go:103] Successfully created a docker volume multinode-813000
	I0307 11:00:51.556808   18638 cli_runner.go:164] Run: docker run --rm --name multinode-813000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-813000 --entrypoint /usr/bin/test -v multinode-813000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0307 11:00:51.847755   18638 oci.go:107] Successfully prepared a docker volume multinode-813000
	I0307 11:00:51.847800   18638 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 11:00:51.847813   18638 kic.go:194] Starting extracting preloaded images to volume ...
	I0307 11:00:51.847919   18638 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-813000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0307 11:06:51.146461   18638 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 11:06:51.146587   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:06:51.197083   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:06:51.197203   18638 retry.go:31] will retry after 154.42833ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:06:51.351900   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:06:51.401438   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:06:51.401551   18638 retry.go:31] will retry after 448.346173ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:06:51.852345   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:06:51.902776   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:06:51.902884   18638 retry.go:31] will retry after 426.598999ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:06:52.330583   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:06:52.380982   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	W0307 11:06:52.381089   18638 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	
	W0307 11:06:52.381109   18638 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:06:52.381171   18638 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0307 11:06:52.381226   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:06:52.430520   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:06:52.430614   18638 retry.go:31] will retry after 265.90496ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:06:52.697436   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:06:52.747578   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:06:52.747684   18638 retry.go:31] will retry after 410.938246ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:06:53.160937   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:06:53.210342   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:06:53.210462   18638 retry.go:31] will retry after 357.123485ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:06:53.567904   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:06:53.619669   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	W0307 11:06:53.619772   18638 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	
	W0307 11:06:53.619796   18638 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:06:53.619814   18638 start.go:128] duration metric: took 6m2.497340574s to createHost
	I0307 11:06:53.619877   18638 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 11:06:53.619940   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:06:53.669085   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:06:53.669180   18638 retry.go:31] will retry after 125.909916ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:06:53.795518   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:06:53.846029   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:06:53.846132   18638 retry.go:31] will retry after 437.018323ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:06:54.285450   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:06:54.335725   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:06:54.335835   18638 retry.go:31] will retry after 315.334641ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:06:54.653404   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:06:54.703707   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:06:54.703802   18638 retry.go:31] will retry after 656.195678ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:06:55.361104   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:06:55.412522   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	W0307 11:06:55.412629   18638 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	
	W0307 11:06:55.412649   18638 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:06:55.412706   18638 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0307 11:06:55.412761   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:06:55.507472   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:06:55.507570   18638 retry.go:31] will retry after 181.928792ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:06:55.691872   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:06:55.744551   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:06:55.744648   18638 retry.go:31] will retry after 262.131019ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:06:56.007223   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:06:56.059852   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:06:56.059958   18638 retry.go:31] will retry after 671.923759ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:06:56.734259   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:06:56.786434   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	I0307 11:06:56.786535   18638 retry.go:31] will retry after 612.225275ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:06:57.399136   18638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000
	W0307 11:06:57.449882   18638 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000 returned with exit code 1
	W0307 11:06:57.449984   18638 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	
	W0307 11:06:57.449998   18638 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-813000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:06:57.450013   18638 fix.go:56] duration metric: took 6m26.281384754s for fixHost
	I0307 11:06:57.450020   18638 start.go:83] releasing machines lock for "multinode-813000", held for 6m26.281429909s
	W0307 11:06:57.450099   18638 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-813000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-813000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0307 11:06:57.493412   18638 out.go:177] 
	W0307 11:06:57.514888   18638 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0307 11:06:57.514943   18638 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0307 11:06:57.514983   18638 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0307 11:06:57.536505   18638 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-813000" : exit status 52
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-813000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-813000
helpers_test.go:235: (dbg) docker inspect multinode-813000:

-- stdout --
	[
	    {
	        "Name": "multinode-813000",
	        "Id": "7162d60bc32dd64f91b943a10e6abe889a96c353162c8491b358a2ba0409b95a",
	        "Created": "2024-03-07T19:00:51.418452235Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-813000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000: exit status 7 (114.637079ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0307 11:06:57.846074   19464 status.go:249] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-813000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (792.42s)

TestMultiNode/serial/DeleteNode (0.48s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-813000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-813000 node delete m03: exit status 80 (201.766522ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-813000 host status: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	

** /stderr **
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-amd64 -p multinode-813000 node delete m03": exit status 80
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-813000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-813000 status --alsologtostderr: exit status 7 (115.780977ms)

-- stdout --
	multinode-813000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0307 11:06:58.112294   19472 out.go:291] Setting OutFile to fd 1 ...
	I0307 11:06:58.112967   19472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 11:06:58.112975   19472 out.go:304] Setting ErrFile to fd 2...
	I0307 11:06:58.112981   19472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 11:06:58.113518   19472 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 11:06:58.113715   19472 out.go:298] Setting JSON to false
	I0307 11:06:58.113742   19472 mustload.go:65] Loading cluster: multinode-813000
	I0307 11:06:58.113793   19472 notify.go:220] Checking for updates...
	I0307 11:06:58.114027   19472 config.go:182] Loaded profile config "multinode-813000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 11:06:58.114043   19472 status.go:255] checking status of multinode-813000 ...
	I0307 11:06:58.114417   19472 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:06:58.163897   19472 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 11:06:58.163969   19472 status.go:330] multinode-813000 host status = "" (err=state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	)
	I0307 11:06:58.163998   19472 status.go:257] multinode-813000 status: &{Name:multinode-813000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 11:06:58.164023   19472 status.go:260] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	E0307 11:06:58.164032   19472 status.go:263] The "multinode-813000" host does not exist!

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-813000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-813000
helpers_test.go:235: (dbg) docker inspect multinode-813000:

-- stdout --
	[
	    {
	        "Name": "multinode-813000",
	        "Id": "7162d60bc32dd64f91b943a10e6abe889a96c353162c8491b358a2ba0409b95a",
	        "Created": "2024-03-07T19:00:51.418452235Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-813000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000: exit status 7 (114.604217ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0307 11:06:58.331059   19478 status.go:249] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-813000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.48s)

TestMultiNode/serial/StopMultiNode (14.3s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-813000 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-813000 stop: exit status 82 (13.90357378s)

-- stdout --
	* Stopping node "multinode-813000"  ...
	* Stopping node "multinode-813000"  ...
	* Stopping node "multinode-813000"  ...
	* Stopping node "multinode-813000"  ...
	* Stopping node "multinode-813000"  ...
	* Stopping node "multinode-813000"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-813000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-darwin-amd64 -p multinode-813000 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-813000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-813000 status: exit status 7 (114.255263ms)

-- stdout --
	multinode-813000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0307 11:07:12.349424   19508 status.go:260] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	E0307 11:07:12.349438   19508 status.go:263] The "multinode-813000" host does not exist!

** /stderr **
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-813000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-813000 status --alsologtostderr: exit status 7 (114.818141ms)

-- stdout --
	multinode-813000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0307 11:07:12.413055   19512 out.go:291] Setting OutFile to fd 1 ...
	I0307 11:07:12.413310   19512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 11:07:12.413316   19512 out.go:304] Setting ErrFile to fd 2...
	I0307 11:07:12.413320   19512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 11:07:12.413500   19512 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 11:07:12.413675   19512 out.go:298] Setting JSON to false
	I0307 11:07:12.413696   19512 mustload.go:65] Loading cluster: multinode-813000
	I0307 11:07:12.413737   19512 notify.go:220] Checking for updates...
	I0307 11:07:12.413969   19512 config.go:182] Loaded profile config "multinode-813000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 11:07:12.413987   19512 status.go:255] checking status of multinode-813000 ...
	I0307 11:07:12.414376   19512 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:07:12.464231   19512 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 11:07:12.464299   19512 status.go:330] multinode-813000 host status = "" (err=state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	)
	I0307 11:07:12.464324   19512 status.go:257] multinode-813000 status: &{Name:multinode-813000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 11:07:12.464344   19512 status.go:260] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	E0307 11:07:12.464351   19512 status.go:263] The "multinode-813000" host does not exist!

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-813000 status --alsologtostderr": multinode-813000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-813000 status --alsologtostderr": multinode-813000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-813000
helpers_test.go:235: (dbg) docker inspect multinode-813000:

-- stdout --
	[
	    {
	        "Name": "multinode-813000",
	        "Id": "7162d60bc32dd64f91b943a10e6abe889a96c353162c8491b358a2ba0409b95a",
	        "Created": "2024-03-07T19:00:51.418452235Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-813000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000: exit status 7 (114.157077ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0307 11:07:12.631306   19518 status.go:249] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-813000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (14.30s)
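The failure above comes down to how the exit of `docker container inspect <name> --format={{.State.Status}}` is interpreted: a "No such container" error is surfaced as the `host: Nonexistent` rows in the status output. A minimal illustrative sketch of that mapping (minikube's real logic lives in Go in `status.go`; the `host_state`/`classify` helpers here are hypothetical names, not minikube's API):

```python
import subprocess

def classify(returncode: int, stdout: str, stderr: str) -> str:
    """Map one `docker container inspect` result to a host state.

    A clean exit yields the container status printed by the template
    (e.g. "running"); a "No such container" daemon error maps to
    "Nonexistent", as in the test output above; anything else is
    treated as a generic error.
    """
    if returncode == 0:
        return stdout.strip()
    if "No such container" in stderr:
        return "Nonexistent"
    return "Error"

def host_state(name: str) -> str:
    """Run the same inspect command the log shows and classify it."""
    proc = subprocess.run(
        ["docker", "container", "inspect", name,
         "--format", "{{.State.Status}}"],
        capture_output=True, text=True,
    )
    return classify(proc.returncode, proc.stdout, proc.stderr)
```

For example, feeding `classify` the stderr captured above (`Error response from daemon: No such container: multinode-813000`, exit status 1) yields "Nonexistent", which is exactly the state the test then reports for host, kubelet, apiserver, and kubeconfig.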

                                                
                                    
TestMultiNode/serial/RestartMultiNode (109.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-813000 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-813000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (1m48.915144798s)

                                                
                                                
-- stdout --
	* [multinode-813000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18239
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-813000" primary control-plane node in "multinode-813000" cluster
	* Pulling base image v0.0.42-1708944392-18244 ...
	* docker "multinode-813000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 11:07:12.693880   19524 out.go:291] Setting OutFile to fd 1 ...
	I0307 11:07:12.694060   19524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 11:07:12.694065   19524 out.go:304] Setting ErrFile to fd 2...
	I0307 11:07:12.694069   19524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 11:07:12.694259   19524 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 11:07:12.695671   19524 out.go:298] Setting JSON to false
	I0307 11:07:12.717867   19524 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7603,"bootTime":1709830829,"procs":438,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0307 11:07:12.717953   19524 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 11:07:12.740157   19524 out.go:177] * [multinode-813000] minikube v1.32.0 on Darwin 14.3.1
	I0307 11:07:12.803791   19524 out.go:177]   - MINIKUBE_LOCATION=18239
	I0307 11:07:12.782629   19524 notify.go:220] Checking for updates...
	I0307 11:07:12.824906   19524 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig
	I0307 11:07:12.845550   19524 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0307 11:07:12.866581   19524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 11:07:12.889825   19524 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube
	I0307 11:07:12.912753   19524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 11:07:12.934600   19524 config.go:182] Loaded profile config "multinode-813000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 11:07:12.935412   19524 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 11:07:12.990477   19524 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0307 11:07:12.990650   19524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 11:07:13.091439   19524 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:84 OomKillDisable:false NGoroutines:155 SystemTime:2024-03-07 19:07:13.08008192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213279744 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0307 11:07:13.113085   19524 out.go:177] * Using the docker driver based on existing profile
	I0307 11:07:13.133873   19524 start.go:297] selected driver: docker
	I0307 11:07:13.133921   19524 start.go:901] validating driver "docker" against &{Name:multinode-813000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 11:07:13.134045   19524 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 11:07:13.134312   19524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 11:07:13.237569   19524 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:84 OomKillDisable:false NGoroutines:155 SystemTime:2024-03-07 19:07:13.227604964 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213279744 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0307 11:07:13.240638   19524 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 11:07:13.240710   19524 cni.go:84] Creating CNI manager for ""
	I0307 11:07:13.240719   19524 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0307 11:07:13.240787   19524 start.go:340] cluster config:
	{Name:multinode-813000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 11:07:13.283903   19524 out.go:177] * Starting "multinode-813000" primary control-plane node in "multinode-813000" cluster
	I0307 11:07:13.304726   19524 cache.go:121] Beginning downloading kic base image for docker with docker
	I0307 11:07:13.325960   19524 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0307 11:07:13.368021   19524 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 11:07:13.368064   19524 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 11:07:13.368100   19524 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0307 11:07:13.368121   19524 cache.go:56] Caching tarball of preloaded images
	I0307 11:07:13.368360   19524 preload.go:173] Found /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 11:07:13.368375   19524 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 11:07:13.368504   19524 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/multinode-813000/config.json ...
	I0307 11:07:13.418659   19524 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0307 11:07:13.418841   19524 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0307 11:07:13.418866   19524 cache.go:194] Successfully downloaded all kic artifacts
	I0307 11:07:13.418921   19524 start.go:360] acquireMachinesLock for multinode-813000: {Name:mk29a5ca7eade859f62bd0aa5a200d60c803f23a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 11:07:13.419010   19524 start.go:364] duration metric: took 71.108µs to acquireMachinesLock for "multinode-813000"
	I0307 11:07:13.419032   19524 start.go:96] Skipping create...Using existing machine configuration
	I0307 11:07:13.419041   19524 fix.go:54] fixHost starting: 
	I0307 11:07:13.419268   19524 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:07:13.468340   19524 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 11:07:13.468394   19524 fix.go:112] recreateIfNeeded on multinode-813000: state= err=unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:07:13.468414   19524 fix.go:117] machineExists: false. err=machine does not exist
	I0307 11:07:13.490140   19524 out.go:177] * docker "multinode-813000" container is missing, will recreate.
	I0307 11:07:13.531807   19524 delete.go:124] DEMOLISHING multinode-813000 ...
	I0307 11:07:13.532033   19524 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:07:13.583421   19524 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	W0307 11:07:13.583478   19524 stop.go:83] unable to get state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:07:13.583497   19524 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:07:13.583866   19524 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:07:13.633180   19524 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 11:07:13.633231   19524 delete.go:82] Unable to get host status for multinode-813000, assuming it has already been deleted: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:07:13.633310   19524 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-813000
	W0307 11:07:13.682603   19524 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-813000 returned with exit code 1
	I0307 11:07:13.682633   19524 kic.go:371] could not find the container multinode-813000 to remove it. will try anyways
	I0307 11:07:13.682704   19524 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:07:13.732126   19524 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	W0307 11:07:13.732174   19524 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:07:13.732260   19524 cli_runner.go:164] Run: docker exec --privileged -t multinode-813000 /bin/bash -c "sudo init 0"
	W0307 11:07:13.781282   19524 cli_runner.go:211] docker exec --privileged -t multinode-813000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0307 11:07:13.781313   19524 oci.go:650] error shutdown multinode-813000: docker exec --privileged -t multinode-813000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:07:14.781878   19524 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:07:14.835396   19524 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 11:07:14.835458   19524 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:07:14.835469   19524 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 11:07:14.835507   19524 retry.go:31] will retry after 470.649708ms: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:07:15.307068   19524 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:07:15.356050   19524 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 11:07:15.356098   19524 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:07:15.356109   19524 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 11:07:15.356131   19524 retry.go:31] will retry after 1.072536103s: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:07:16.430071   19524 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:07:16.481043   19524 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 11:07:16.481086   19524 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:07:16.481095   19524 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 11:07:16.481121   19524 retry.go:31] will retry after 1.461334208s: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:07:17.944193   19524 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:07:17.994654   19524 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 11:07:17.994702   19524 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:07:17.994711   19524 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 11:07:17.994736   19524 retry.go:31] will retry after 1.631866628s: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:07:19.627242   19524 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:07:19.679761   19524 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 11:07:19.679806   19524 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:07:19.679818   19524 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 11:07:19.679842   19524 retry.go:31] will retry after 3.502249883s: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:07:23.184327   19524 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:07:23.237373   19524 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 11:07:23.237419   19524 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:07:23.237429   19524 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 11:07:23.237451   19524 retry.go:31] will retry after 2.278170821s: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:07:25.515888   19524 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:07:25.565818   19524 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 11:07:25.565863   19524 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:07:25.565888   19524 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 11:07:25.565922   19524 retry.go:31] will retry after 7.775455224s: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:07:33.343156   19524 cli_runner.go:164] Run: docker container inspect multinode-813000 --format={{.State.Status}}
	W0307 11:07:33.393735   19524 cli_runner.go:211] docker container inspect multinode-813000 --format={{.State.Status}} returned with exit code 1
	I0307 11:07:33.393779   19524 oci.go:662] temporary error verifying shutdown: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	I0307 11:07:33.393790   19524 oci.go:664] temporary error: container multinode-813000 status is  but expect it to be exited
	I0307 11:07:33.393820   19524 oci.go:88] couldn't shut down multinode-813000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000
	 
	I0307 11:07:33.393891   19524 cli_runner.go:164] Run: docker rm -f -v multinode-813000
	I0307 11:07:33.443578   19524 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-813000
	W0307 11:07:33.492208   19524 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-813000 returned with exit code 1
	I0307 11:07:33.492313   19524 cli_runner.go:164] Run: docker network inspect multinode-813000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 11:07:33.542511   19524 cli_runner.go:164] Run: docker network rm multinode-813000
	I0307 11:07:33.650285   19524 fix.go:124] Sleeping 1 second for extra luck!
	I0307 11:07:34.650971   19524 start.go:125] createHost starting for "" (driver="docker")
	I0307 11:07:34.674320   19524 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0307 11:07:34.674526   19524 start.go:159] libmachine.API.Create for "multinode-813000" (driver="docker")
	I0307 11:07:34.674589   19524 client.go:168] LocalClient.Create starting
	I0307 11:07:34.674759   19524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18239-8734/.minikube/certs/ca.pem
	I0307 11:07:34.674851   19524 main.go:141] libmachine: Decoding PEM data...
	I0307 11:07:34.674881   19524 main.go:141] libmachine: Parsing certificate...
	I0307 11:07:34.674975   19524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18239-8734/.minikube/certs/cert.pem
	I0307 11:07:34.675060   19524 main.go:141] libmachine: Decoding PEM data...
	I0307 11:07:34.675077   19524 main.go:141] libmachine: Parsing certificate...
	I0307 11:07:34.676672   19524 cli_runner.go:164] Run: docker network inspect multinode-813000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0307 11:07:34.727434   19524 cli_runner.go:211] docker network inspect multinode-813000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0307 11:07:34.727515   19524 network_create.go:281] running [docker network inspect multinode-813000] to gather additional debugging logs...
	I0307 11:07:34.727541   19524 cli_runner.go:164] Run: docker network inspect multinode-813000
	W0307 11:07:34.777334   19524 cli_runner.go:211] docker network inspect multinode-813000 returned with exit code 1
	I0307 11:07:34.777363   19524 network_create.go:284] error running [docker network inspect multinode-813000]: docker network inspect multinode-813000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-813000 not found
	I0307 11:07:34.777381   19524 network_create.go:286] output of [docker network inspect multinode-813000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-813000 not found
	
	** /stderr **
	I0307 11:07:34.777513   19524 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 11:07:34.831501   19524 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 11:07:34.833250   19524 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 11:07:34.833746   19524 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021e3ce0}
	I0307 11:07:34.833767   19524 network_create.go:124] attempt to create docker network multinode-813000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0307 11:07:34.833881   19524 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-813000 multinode-813000
	I0307 11:07:34.926269   19524 network_create.go:108] docker network multinode-813000 192.168.67.0/24 created
	I0307 11:07:34.926313   19524 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-813000" container
	I0307 11:07:34.926419   19524 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0307 11:07:34.978125   19524 cli_runner.go:164] Run: docker volume create multinode-813000 --label name.minikube.sigs.k8s.io=multinode-813000 --label created_by.minikube.sigs.k8s.io=true
	I0307 11:07:35.029399   19524 oci.go:103] Successfully created a docker volume multinode-813000
	I0307 11:07:35.029524   19524 cli_runner.go:164] Run: docker run --rm --name multinode-813000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-813000 --entrypoint /usr/bin/test -v multinode-813000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0307 11:07:35.340257   19524 oci.go:107] Successfully prepared a docker volume multinode-813000
	I0307 11:07:35.340296   19524 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 11:07:35.340308   19524 kic.go:194] Starting extracting preloaded images to volume ...
	I0307 11:07:35.340403   19524 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-813000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-813000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-813000
helpers_test.go:235: (dbg) docker inspect multinode-813000:

-- stdout --
	[
	    {
	        "Name": "multinode-813000",
	        "Id": "b7cdc005496187dd6b53a355f490a168cab60bcd51e958ddb348e9a3854c937e",
	        "Created": "2024-03-07T19:07:34.881645917Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-813000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-813000 -n multinode-813000: exit status 7 (114.323051ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0307 11:09:01.718992   19712 status.go:249] status error: host: state: unknown state "multinode-813000": docker container inspect multinode-813000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-813000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-813000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (109.09s)

TestScheduledStopUnix (300.89s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-656000 --memory=2048 --driver=docker 
E0307 11:15:33.104749    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 11:16:35.765370    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-656000 --memory=2048 --driver=docker : signal: killed (5m0.003430795s)

-- stdout --
	* [scheduled-stop-656000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18239
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-656000" primary control-plane node in "scheduled-stop-656000" cluster
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

-- stdout --
	* [scheduled-stop-656000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18239
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-656000" primary control-plane node in "scheduled-stop-656000" cluster
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

-- /stdout --
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-03-07 11:17:27.368797 -0800 PST m=+4920.930444956
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-656000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-656000:

-- stdout --
	[
	    {
	        "Name": "scheduled-stop-656000",
	        "Id": "bafddaef7aa3899e01cb361db8814a214299b26b7d81ca7c68c34f07b036c651",
	        "Created": "2024-03-07T19:12:28.386779929Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-656000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-656000 -n scheduled-stop-656000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-656000 -n scheduled-stop-656000: exit status 7 (114.352231ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0307 11:17:27.533951   20450 status.go:249] status error: host: state: unknown state "scheduled-stop-656000": docker container inspect scheduled-stop-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-656000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-656000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-656000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-656000
--- FAIL: TestScheduledStopUnix (300.89s)

TestSkaffold (300.91s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe2603927953 version
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-279000 --memory=2600 --driver=docker 
E0307 11:17:58.812048    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 11:20:33.110190    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 11:21:35.771169    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-279000 --memory=2600 --driver=docker : signal: killed (4m53.293856321s)

-- stdout --
	* [skaffold-279000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18239
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-279000" primary control-plane node in "skaffold-279000" cluster
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed

-- stdout --
	* [skaffold-279000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18239
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-279000" primary control-plane node in "skaffold-279000" cluster
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

-- /stdout --
panic.go:626: *** TestSkaffold FAILED at 2024-03-07 11:22:28.273516 -0800 PST m=+5221.829197024
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-279000
helpers_test.go:235: (dbg) docker inspect skaffold-279000:

-- stdout --
	[
	    {
	        "Name": "skaffold-279000",
	        "Id": "4c2eebeea99cda8e2e17923d519c5f30ab88d34d1be2429e076c046d4b82358b",
	        "Created": "2024-03-07T19:17:36.06418479Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-279000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-279000 -n skaffold-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-279000 -n skaffold-279000: exit status 7 (114.893548ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0307 11:22:28.440173   20723 status.go:249] status error: host: state: unknown state "skaffold-279000": docker container inspect skaffold-279000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-279000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-279000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-279000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-279000
--- FAIL: TestSkaffold (300.91s)

TestInsufficientStorage (300.75s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-188000 --memory=2048 --output=json --wait=true --driver=docker 
E0307 11:25:33.116293    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 11:26:35.779102    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-188000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.0053575s)

-- stdout --
	{"specversion":"1.0","id":"c2c36fc4-c411-4473-aae8-56e849f0b61c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-188000] minikube v1.32.0 on Darwin 14.3.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9390eb7d-a6c7-474e-8ab8-45127eb9ee28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18239"}}
	{"specversion":"1.0","id":"9077bc56-e0c2-493e-bf01-a9db74761d66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig"}}
	{"specversion":"1.0","id":"49074d0b-ee19-435c-99e8-15c1772deae7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"b02d94f4-fcad-4a43-a85e-7a4c25cde67c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a6f95b33-aba8-4b95-aebd-1d0ade63fa90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube"}}
	{"specversion":"1.0","id":"7b690dc7-9e30-44d3-8b33-a88e9090e41c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"defeaa1d-7db9-4909-a82a-8c5a785cb562","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"17cd7d30-48c8-4052-90a0-e8a64ad21d1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"91eca244-8d5a-417d-a7ae-cbfe59849132","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fbaa8825-3581-41b7-9b84-5e7776819a2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"fb0f05b5-80fc-47b3-a123-7082d556e5ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-188000\" primary control-plane node in \"insufficient-storage-188000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4f25b33f-3110-46b4-9a87-6bd460eb7c0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1708944392-18244 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"35a89a0d-c506-46c6-931b-bee9396be577","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-188000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-188000 --output=json --layout=cluster: context deadline exceeded (790ns)
status_test.go:87: unmarshalling: unexpected end of JSON input
helpers_test.go:175: Cleaning up "insufficient-storage-188000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-188000
--- FAIL: TestInsufficientStorage (300.75s)

TestKubernetesUpgrade (769.25s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-510000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker 
E0307 11:40:33.135346    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 11:41:35.795039    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-510000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker : exit status 52 (12m34.979734242s)

-- stdout --
	* [kubernetes-upgrade-510000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18239
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "kubernetes-upgrade-510000" primary control-plane node in "kubernetes-upgrade-510000" cluster
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "kubernetes-upgrade-510000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0307 11:40:04.669894   21888 out.go:291] Setting OutFile to fd 1 ...
	I0307 11:40:04.670152   21888 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 11:40:04.670157   21888 out.go:304] Setting ErrFile to fd 2...
	I0307 11:40:04.670161   21888 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 11:40:04.670336   21888 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 11:40:04.671867   21888 out.go:298] Setting JSON to false
	I0307 11:40:04.694152   21888 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":9575,"bootTime":1709830829,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0307 11:40:04.694239   21888 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 11:40:04.716178   21888 out.go:177] * [kubernetes-upgrade-510000] minikube v1.32.0 on Darwin 14.3.1
	I0307 11:40:04.758055   21888 out.go:177]   - MINIKUBE_LOCATION=18239
	I0307 11:40:04.758071   21888 notify.go:220] Checking for updates...
	I0307 11:40:04.801064   21888 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig
	I0307 11:40:04.823157   21888 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0307 11:40:04.844849   21888 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 11:40:04.866077   21888 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube
	I0307 11:40:04.887292   21888 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 11:40:04.909813   21888 config.go:182] Loaded profile config "missing-upgrade-032000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 11:40:04.909988   21888 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 11:40:04.967438   21888 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0307 11:40:04.967588   21888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 11:40:05.066260   21888 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:108 OomKillDisable:false NGoroutines:235 SystemTime:2024-03-07 19:40:05.05512249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddres
s:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213279744 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined na
me=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker
Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM
) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0307 11:40:05.109860   21888 out.go:177] * Using the docker driver based on user configuration
	I0307 11:40:05.130646   21888 start.go:297] selected driver: docker
	I0307 11:40:05.130668   21888 start.go:901] validating driver "docker" against <nil>
	I0307 11:40:05.130679   21888 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 11:40:05.134196   21888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 11:40:05.233684   21888 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:108 OomKillDisable:false NGoroutines:235 SystemTime:2024-03-07 19:40:05.223233077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddre
ss:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213279744 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined n
ame=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docke
r Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBO
M) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0307 11:40:05.233901   21888 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 11:40:05.234087   21888 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 11:40:05.255615   21888 out.go:177] * Using Docker Desktop driver with root privileges
	I0307 11:40:05.276700   21888 cni.go:84] Creating CNI manager for ""
	I0307 11:40:05.276747   21888 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0307 11:40:05.276849   21888 start.go:340] cluster config:
	{Name:kubernetes-upgrade-510000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-510000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 11:40:05.298517   21888 out.go:177] * Starting "kubernetes-upgrade-510000" primary control-plane node in "kubernetes-upgrade-510000" cluster
	I0307 11:40:05.340724   21888 cache.go:121] Beginning downloading kic base image for docker with docker
	I0307 11:40:05.362653   21888 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0307 11:40:05.404746   21888 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 11:40:05.404833   21888 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0307 11:40:05.404816   21888 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 11:40:05.404845   21888 cache.go:56] Caching tarball of preloaded images
	I0307 11:40:05.404993   21888 preload.go:173] Found /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 11:40:05.405007   21888 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0307 11:40:05.405107   21888 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/kubernetes-upgrade-510000/config.json ...
	I0307 11:40:05.405155   21888 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/kubernetes-upgrade-510000/config.json: {Name:mkee0613502b9cce1c16abc33fc423a3fcbd3333 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 11:40:05.456548   21888 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0307 11:40:05.456569   21888 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0307 11:40:05.456603   21888 cache.go:194] Successfully downloaded all kic artifacts
	I0307 11:40:05.456640   21888 start.go:360] acquireMachinesLock for kubernetes-upgrade-510000: {Name:mk0b89ef95fd7b4bcfcbc7002e0ccc517221a94e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 11:40:05.456784   21888 start.go:364] duration metric: took 132.549µs to acquireMachinesLock for "kubernetes-upgrade-510000"
	I0307 11:40:05.456811   21888 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-510000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-510000 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 11:40:05.456976   21888 start.go:125] createHost starting for "" (driver="docker")
	I0307 11:40:05.499593   21888 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0307 11:40:05.499967   21888 start.go:159] libmachine.API.Create for "kubernetes-upgrade-510000" (driver="docker")
	I0307 11:40:05.500015   21888 client.go:168] LocalClient.Create starting
	I0307 11:40:05.500225   21888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18239-8734/.minikube/certs/ca.pem
	I0307 11:40:05.500317   21888 main.go:141] libmachine: Decoding PEM data...
	I0307 11:40:05.500351   21888 main.go:141] libmachine: Parsing certificate...
	I0307 11:40:05.500458   21888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18239-8734/.minikube/certs/cert.pem
	I0307 11:40:05.500527   21888 main.go:141] libmachine: Decoding PEM data...
	I0307 11:40:05.500543   21888 main.go:141] libmachine: Parsing certificate...
	I0307 11:40:05.501548   21888 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-510000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0307 11:40:05.574459   21888 cli_runner.go:211] docker network inspect kubernetes-upgrade-510000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0307 11:40:05.574593   21888 network_create.go:281] running [docker network inspect kubernetes-upgrade-510000] to gather additional debugging logs...
	I0307 11:40:05.574615   21888 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-510000
	W0307 11:40:05.624884   21888 cli_runner.go:211] docker network inspect kubernetes-upgrade-510000 returned with exit code 1
	I0307 11:40:05.624916   21888 network_create.go:284] error running [docker network inspect kubernetes-upgrade-510000]: docker network inspect kubernetes-upgrade-510000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-510000 not found
	I0307 11:40:05.624931   21888 network_create.go:286] output of [docker network inspect kubernetes-upgrade-510000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-510000 not found
	
	** /stderr **
	I0307 11:40:05.625049   21888 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 11:40:05.677134   21888 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 11:40:05.678841   21888 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 11:40:05.679219   21888 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00213fbf0}
	I0307 11:40:05.679235   21888 network_create.go:124] attempt to create docker network kubernetes-upgrade-510000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0307 11:40:05.679305   21888 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-510000 kubernetes-upgrade-510000
	W0307 11:40:05.729739   21888 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-510000 kubernetes-upgrade-510000 returned with exit code 1
	W0307 11:40:05.729772   21888 network_create.go:149] failed to create docker network kubernetes-upgrade-510000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-510000 kubernetes-upgrade-510000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0307 11:40:05.729793   21888 network_create.go:116] failed to create docker network kubernetes-upgrade-510000 192.168.67.0/24, will retry: subnet is taken
	I0307 11:40:05.731156   21888 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 11:40:05.731532   21888 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002486410}
	I0307 11:40:05.731543   21888 network_create.go:124] attempt to create docker network kubernetes-upgrade-510000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0307 11:40:05.731618   21888 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-510000 kubernetes-upgrade-510000
	W0307 11:40:05.781569   21888 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-510000 kubernetes-upgrade-510000 returned with exit code 1
	W0307 11:40:05.781608   21888 network_create.go:149] failed to create docker network kubernetes-upgrade-510000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-510000 kubernetes-upgrade-510000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0307 11:40:05.781628   21888 network_create.go:116] failed to create docker network kubernetes-upgrade-510000 192.168.76.0/24, will retry: subnet is taken
	I0307 11:40:05.783233   21888 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 11:40:05.783596   21888 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0024874c0}
	I0307 11:40:05.783615   21888 network_create.go:124] attempt to create docker network kubernetes-upgrade-510000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0307 11:40:05.783685   21888 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-510000 kubernetes-upgrade-510000
	I0307 11:40:05.869704   21888 network_create.go:108] docker network kubernetes-upgrade-510000 192.168.85.0/24 created
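	[editor's note] The subnet probing above (192.168.67.0/24 and 192.168.76.0/24 rejected with "Pool overlaps with other one on this address space", then 192.168.85.0/24 accepted) reduces to a CIDR-overlap check against the subnets Docker already holds. A minimal sketch of that logic using Python's standard ipaddress module; the candidate list and taken set are illustrative values matching this log, not minikube's actual code:

```python
import ipaddress

# Subnets already claimed by existing Docker networks (illustrative,
# matching the log: 192.168.67.0/24 and 192.168.76.0/24 are taken).
taken = [ipaddress.ip_network(s) for s in ("192.168.67.0/24", "192.168.76.0/24")]

def first_free_subnet(candidates, taken):
    """Return the first candidate subnet that overlaps none of the taken ones."""
    for cidr in candidates:
        net = ipaddress.ip_network(cidr)
        if not any(net.overlaps(t) for t in taken):
            return net
    return None

# The log steps through private /24s in increments of 9 (..., 67, 76, 85, ...).
candidates = [f"192.168.{third}.0/24" for third in range(67, 256, 9)]
print(first_free_subnet(candidates, taken))  # 192.168.85.0/24
```

With 67 and 76 taken, the first non-overlapping candidate is 192.168.85.0/24, which is exactly the subnet the `docker network create` above finally succeeded with.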
	I0307 11:40:05.869746   21888 kic.go:121] calculated static IP "192.168.85.2" for the "kubernetes-upgrade-510000" container
	I0307 11:40:05.869859   21888 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0307 11:40:05.922755   21888 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-510000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-510000 --label created_by.minikube.sigs.k8s.io=true
	I0307 11:40:05.973737   21888 oci.go:103] Successfully created a docker volume kubernetes-upgrade-510000
	I0307 11:40:05.973877   21888 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-510000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-510000 --entrypoint /usr/bin/test -v kubernetes-upgrade-510000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0307 11:40:06.372783   21888 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-510000
	I0307 11:40:06.372841   21888 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 11:40:06.372871   21888 kic.go:194] Starting extracting preloaded images to volume ...
	I0307 11:40:06.372987   21888 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-510000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0307 11:46:05.561429   21888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 11:46:05.561649   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:46:05.676843   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	I0307 11:46:05.676965   21888 retry.go:31] will retry after 244.009569ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:05.923228   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:46:05.973347   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	I0307 11:46:05.973442   21888 retry.go:31] will retry after 294.937875ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:06.270833   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:46:06.321481   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	I0307 11:46:06.321598   21888 retry.go:31] will retry after 322.252922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:06.644510   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:46:06.694857   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	W0307 11:46:06.694987   21888 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	
	W0307 11:46:06.695013   21888 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:06.695078   21888 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0307 11:46:06.695157   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:46:06.745291   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	I0307 11:46:06.745383   21888 retry.go:31] will retry after 299.710159ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:07.047460   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:46:07.097331   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	I0307 11:46:07.097437   21888 retry.go:31] will retry after 557.701754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:07.656380   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:46:07.707131   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	I0307 11:46:07.707220   21888 retry.go:31] will retry after 561.720869ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:08.271264   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:46:08.322820   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	W0307 11:46:08.322937   21888 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	
	W0307 11:46:08.322961   21888 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:08.322975   21888 start.go:128] duration metric: took 6m2.80718273s to createHost
	I0307 11:46:08.322982   21888 start.go:83] releasing machines lock for "kubernetes-upgrade-510000", held for 6m2.807388413s
	W0307 11:46:08.322998   21888 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0307 11:46:08.323421   21888 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}
	W0307 11:46:08.373348   21888 cli_runner.go:211] docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}} returned with exit code 1
	I0307 11:46:08.373405   21888 delete.go:82] Unable to get host status for kubernetes-upgrade-510000, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-510000": docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	W0307 11:46:08.373483   21888 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0307 11:46:08.373494   21888 start.go:728] Will try again in 5 seconds ...
	I0307 11:46:13.375805   21888 start.go:360] acquireMachinesLock for kubernetes-upgrade-510000: {Name:mk0b89ef95fd7b4bcfcbc7002e0ccc517221a94e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 11:46:13.376043   21888 start.go:364] duration metric: took 165.072µs to acquireMachinesLock for "kubernetes-upgrade-510000"
	I0307 11:46:13.376076   21888 start.go:96] Skipping create...Using existing machine configuration
	I0307 11:46:13.376092   21888 fix.go:54] fixHost starting: 
	I0307 11:46:13.376529   21888 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}
	W0307 11:46:13.429855   21888 cli_runner.go:211] docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}} returned with exit code 1
	I0307 11:46:13.429907   21888 fix.go:112] recreateIfNeeded on kubernetes-upgrade-510000: state= err=unknown state "kubernetes-upgrade-510000": docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:13.429926   21888 fix.go:117] machineExists: false. err=machine does not exist
	I0307 11:46:13.451493   21888 out.go:177] * docker "kubernetes-upgrade-510000" container is missing, will recreate.
	I0307 11:46:13.495539   21888 delete.go:124] DEMOLISHING kubernetes-upgrade-510000 ...
	I0307 11:46:13.495730   21888 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}
	W0307 11:46:13.547417   21888 cli_runner.go:211] docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}} returned with exit code 1
	W0307 11:46:13.547471   21888 stop.go:83] unable to get state: unknown state "kubernetes-upgrade-510000": docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:13.547486   21888 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "kubernetes-upgrade-510000": docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:13.547848   21888 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}
	W0307 11:46:13.601958   21888 cli_runner.go:211] docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}} returned with exit code 1
	I0307 11:46:13.602022   21888 delete.go:82] Unable to get host status for kubernetes-upgrade-510000, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-510000": docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:13.602099   21888 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-510000
	W0307 11:46:13.651616   21888 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-510000 returned with exit code 1
	I0307 11:46:13.651659   21888 kic.go:371] could not find the container kubernetes-upgrade-510000 to remove it. will try anyways
	I0307 11:46:13.651740   21888 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}
	W0307 11:46:13.701522   21888 cli_runner.go:211] docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}} returned with exit code 1
	W0307 11:46:13.701568   21888 oci.go:84] error getting container status, will try to delete anyways: unknown state "kubernetes-upgrade-510000": docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:13.701649   21888 cli_runner.go:164] Run: docker exec --privileged -t kubernetes-upgrade-510000 /bin/bash -c "sudo init 0"
	W0307 11:46:13.751018   21888 cli_runner.go:211] docker exec --privileged -t kubernetes-upgrade-510000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0307 11:46:13.751053   21888 oci.go:650] error shutdown kubernetes-upgrade-510000: docker exec --privileged -t kubernetes-upgrade-510000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:14.751951   21888 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}
	W0307 11:46:14.824188   21888 cli_runner.go:211] docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}} returned with exit code 1
	I0307 11:46:14.824256   21888 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-510000": docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:14.824275   21888 oci.go:664] temporary error: container kubernetes-upgrade-510000 status is  but expect it to be exited
	I0307 11:46:14.824307   21888 retry.go:31] will retry after 749.641609ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-510000": docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:15.574300   21888 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}
	W0307 11:46:15.683041   21888 cli_runner.go:211] docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}} returned with exit code 1
	I0307 11:46:15.683107   21888 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-510000": docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:15.683140   21888 oci.go:664] temporary error: container kubernetes-upgrade-510000 status is  but expect it to be exited
	I0307 11:46:15.683168   21888 retry.go:31] will retry after 416.395613ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-510000": docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:16.099846   21888 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}
	W0307 11:46:16.151862   21888 cli_runner.go:211] docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}} returned with exit code 1
	I0307 11:46:16.151912   21888 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-510000": docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:16.151921   21888 oci.go:664] temporary error: container kubernetes-upgrade-510000 status is  but expect it to be exited
	I0307 11:46:16.151946   21888 retry.go:31] will retry after 802.599802ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-510000": docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:16.955311   21888 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}
	W0307 11:46:17.007709   21888 cli_runner.go:211] docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}} returned with exit code 1
	I0307 11:46:17.007779   21888 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-510000": docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:17.007796   21888 oci.go:664] temporary error: container kubernetes-upgrade-510000 status is  but expect it to be exited
	I0307 11:46:17.007820   21888 retry.go:31] will retry after 1.54993843s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-510000": docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:18.559385   21888 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}
	W0307 11:46:18.610716   21888 cli_runner.go:211] docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}} returned with exit code 1
	I0307 11:46:18.610769   21888 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-510000": docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:18.610778   21888 oci.go:664] temporary error: container kubernetes-upgrade-510000 status is  but expect it to be exited
	I0307 11:46:18.610806   21888 retry.go:31] will retry after 2.575434611s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-510000": docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:21.188634   21888 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}
	W0307 11:46:21.240085   21888 cli_runner.go:211] docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}} returned with exit code 1
	I0307 11:46:21.240140   21888 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-510000": docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:21.240151   21888 oci.go:664] temporary error: container kubernetes-upgrade-510000 status is  but expect it to be exited
	I0307 11:46:21.240177   21888 retry.go:31] will retry after 4.140449494s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-510000": docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:25.381026   21888 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}
	W0307 11:46:25.432386   21888 cli_runner.go:211] docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}} returned with exit code 1
	I0307 11:46:25.432434   21888 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-510000": docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:25.432443   21888 oci.go:664] temporary error: container kubernetes-upgrade-510000 status is  but expect it to be exited
	I0307 11:46:25.432468   21888 retry.go:31] will retry after 6.566863755s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-510000": docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:32.001084   21888 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}
	W0307 11:46:32.054371   21888 cli_runner.go:211] docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}} returned with exit code 1
	I0307 11:46:32.054420   21888 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-510000": docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:46:32.054428   21888 oci.go:664] temporary error: container kubernetes-upgrade-510000 status is  but expect it to be exited
	I0307 11:46:32.054461   21888 oci.go:88] couldn't shut down kubernetes-upgrade-510000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-510000": docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	 
	I0307 11:46:32.054528   21888 cli_runner.go:164] Run: docker rm -f -v kubernetes-upgrade-510000
	I0307 11:46:32.104473   21888 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-510000
	W0307 11:46:32.154540   21888 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-510000 returned with exit code 1
	I0307 11:46:32.154653   21888 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-510000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 11:46:32.204660   21888 cli_runner.go:164] Run: docker network rm kubernetes-upgrade-510000
	I0307 11:46:32.316323   21888 fix.go:124] Sleeping 1 second for extra luck!
	I0307 11:46:33.316966   21888 start.go:125] createHost starting for "" (driver="docker")
	I0307 11:46:33.338038   21888 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0307 11:46:33.338250   21888 start.go:159] libmachine.API.Create for "kubernetes-upgrade-510000" (driver="docker")
	I0307 11:46:33.338278   21888 client.go:168] LocalClient.Create starting
	I0307 11:46:33.338490   21888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18239-8734/.minikube/certs/ca.pem
	I0307 11:46:33.338587   21888 main.go:141] libmachine: Decoding PEM data...
	I0307 11:46:33.338612   21888 main.go:141] libmachine: Parsing certificate...
	I0307 11:46:33.338713   21888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18239-8734/.minikube/certs/cert.pem
	I0307 11:46:33.338783   21888 main.go:141] libmachine: Decoding PEM data...
	I0307 11:46:33.338797   21888 main.go:141] libmachine: Parsing certificate...
	I0307 11:46:33.360669   21888 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-510000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0307 11:46:33.411207   21888 cli_runner.go:211] docker network inspect kubernetes-upgrade-510000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0307 11:46:33.411298   21888 network_create.go:281] running [docker network inspect kubernetes-upgrade-510000] to gather additional debugging logs...
	I0307 11:46:33.411318   21888 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-510000
	W0307 11:46:33.461480   21888 cli_runner.go:211] docker network inspect kubernetes-upgrade-510000 returned with exit code 1
	I0307 11:46:33.461514   21888 network_create.go:284] error running [docker network inspect kubernetes-upgrade-510000]: docker network inspect kubernetes-upgrade-510000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-510000 not found
	I0307 11:46:33.461528   21888 network_create.go:286] output of [docker network inspect kubernetes-upgrade-510000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-510000 not found
	
	** /stderr **
	I0307 11:46:33.461656   21888 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 11:46:33.513403   21888 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 11:46:33.514791   21888 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 11:46:33.516309   21888 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 11:46:33.517611   21888 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 11:46:33.519163   21888 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0307 11:46:33.519498   21888 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022f27e0}
	I0307 11:46:33.519516   21888 network_create.go:124] attempt to create docker network kubernetes-upgrade-510000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0307 11:46:33.519591   21888 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-510000 kubernetes-upgrade-510000
	I0307 11:46:33.605690   21888 network_create.go:108] docker network kubernetes-upgrade-510000 192.168.94.0/24 created
	I0307 11:46:33.605731   21888 kic.go:121] calculated static IP "192.168.94.2" for the "kubernetes-upgrade-510000" container
	I0307 11:46:33.605850   21888 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0307 11:46:33.658640   21888 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-510000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-510000 --label created_by.minikube.sigs.k8s.io=true
	I0307 11:46:33.708555   21888 oci.go:103] Successfully created a docker volume kubernetes-upgrade-510000
	I0307 11:46:33.708678   21888 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-510000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-510000 --entrypoint /usr/bin/test -v kubernetes-upgrade-510000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0307 11:46:34.003365   21888 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-510000
	I0307 11:46:34.003402   21888 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 11:46:34.003417   21888 kic.go:194] Starting extracting preloaded images to volume ...
	I0307 11:46:34.003541   21888 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-510000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0307 11:52:33.346692   21888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 11:52:33.346845   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:52:33.399428   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	I0307 11:52:33.399551   21888 retry.go:31] will retry after 172.545378ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:52:33.572495   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:52:33.625734   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	I0307 11:52:33.625851   21888 retry.go:31] will retry after 451.728574ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:52:34.078885   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:52:34.129021   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	I0307 11:52:34.129125   21888 retry.go:31] will retry after 565.479451ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:52:34.694945   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:52:34.765103   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	W0307 11:52:34.765203   21888 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	
	W0307 11:52:34.765224   21888 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:52:34.765286   21888 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0307 11:52:34.765341   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:52:34.815836   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	I0307 11:52:34.815949   21888 retry.go:31] will retry after 142.629701ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:52:34.959036   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:52:35.010706   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	I0307 11:52:35.010808   21888 retry.go:31] will retry after 487.106691ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:52:35.499223   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:52:35.549545   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	I0307 11:52:35.549643   21888 retry.go:31] will retry after 463.6643ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:52:36.013642   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:52:36.063792   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	W0307 11:52:36.063897   21888 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	
	W0307 11:52:36.063915   21888 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:52:36.063928   21888 start.go:128] duration metric: took 6m2.739832317s to createHost
	I0307 11:52:36.063991   21888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 11:52:36.064047   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:52:36.113766   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	I0307 11:52:36.113862   21888 retry.go:31] will retry after 125.240911ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:52:36.241464   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:52:36.292364   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	I0307 11:52:36.292462   21888 retry.go:31] will retry after 478.364913ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:52:36.772950   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:52:36.823183   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	I0307 11:52:36.823282   21888 retry.go:31] will retry after 317.531181ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:52:37.141732   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:52:37.193091   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	I0307 11:52:37.193189   21888 retry.go:31] will retry after 647.72195ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:52:37.841847   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:52:37.893288   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	W0307 11:52:37.893401   21888 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	
	W0307 11:52:37.893423   21888 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:52:37.893481   21888 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0307 11:52:37.893536   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:52:37.943440   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	I0307 11:52:37.943546   21888 retry.go:31] will retry after 218.392389ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:52:38.163015   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:52:38.216820   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	I0307 11:52:38.216914   21888 retry.go:31] will retry after 406.833084ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:52:38.624545   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:52:38.677054   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	I0307 11:52:38.677154   21888 retry.go:31] will retry after 788.876897ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:52:39.467021   21888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000
	W0307 11:52:39.520736   21888 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000 returned with exit code 1
	W0307 11:52:39.520837   21888 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	
	W0307 11:52:39.520852   21888 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-510000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-510000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	I0307 11:52:39.520861   21888 fix.go:56] duration metric: took 6m26.137205171s for fixHost
	I0307 11:52:39.520869   21888 start.go:83] releasing machines lock for "kubernetes-upgrade-510000", held for 6m26.137249447s
	W0307 11:52:39.520969   21888 out.go:239] * Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-510000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-510000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0307 11:52:39.564662   21888 out.go:177] 
	W0307 11:52:39.586457   21888 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0307 11:52:39.586517   21888 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0307 11:52:39.586542   21888 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0307 11:52:39.608688   21888 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-510000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker : exit status 52
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-510000
version_upgrade_test.go:227: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-510000: exit status 82 (13.352678577s)

                                                
                                                
-- stdout --
	* Stopping node "kubernetes-upgrade-510000"  ...
	* Stopping node "kubernetes-upgrade-510000"  ...
	* Stopping node "kubernetes-upgrade-510000"  ...
	* Stopping node "kubernetes-upgrade-510000"  ...
	* Stopping node "kubernetes-upgrade-510000"  ...
	* Stopping node "kubernetes-upgrade-510000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect kubernetes-upgrade-510000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
version_upgrade_test.go:229: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-510000 failed: exit status 82
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-03-07 11:52:53.019579 -0800 PST m=+7046.487565652
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-510000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-510000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "kubernetes-upgrade-510000",
	        "Id": "5c2eb693f83b2e29a9e4b4ec2312ccef50abc3c14dfffadd522d1cef4e8fe867",
	        "Created": "2024-03-07T19:46:33.565425892Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "kubernetes-upgrade-510000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
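Note what `docker inspect` matched here: the container is gone, so the bare name resolved to the leftover *network* object, which has no `.NetworkSettings.Ports` — the port-lookup template used throughout this log can only fail against it. The sketch below shows how that Go template behaves against mock inspect data; the struct fields mirror Docker's JSON field names, but the data itself is invented for illustration.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Mock of the fragment of `docker container inspect` output that the
// template reads. Field names mirror Docker's JSON; values are made up.
type portBinding struct{ HostPort string }
type inspectData struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

// hostPort evaluates the same template minikube passes to `docker
// container inspect -f` to find the SSH host port.
func hostPort(data inspectData) (string, error) {
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, data); err != nil {
		// This is the failure mode once the "22/tcp" mapping is absent.
		return "", err
	}
	return buf.String(), nil
}

func main() {
	var d inspectData
	d.NetworkSettings.Ports = map[string][]portBinding{
		"22/tcp": {{HostPort: "58422"}}, // hypothetical mapped port
	}
	p, err := hostPort(d)
	fmt.Println(p, err) // 58422 <nil>

	var missing inspectData // no port mappings, like a deleted container
	_, err = hostPort(missing)
	fmt.Println(err != nil) // true
}
```

Indexing the empty `Ports` map yields a nil slice, and `index <nil slice> 0` makes template execution error out — which is why the CLI wrapper reports exit code 1 rather than an empty port.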
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-510000 -n kubernetes-upgrade-510000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-510000 -n kubernetes-upgrade-510000: exit status 7 (114.635137ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0307 11:52:53.185599   22690 status.go:249] status error: host: state: unknown state "kubernetes-upgrade-510000": docker container inspect kubernetes-upgrade-510000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-510000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-510000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-510000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-510000
--- FAIL: TestKubernetesUpgrade (769.25s)

                                                
                                    
TestMissingContainerUpgrade (7200.689s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3989123546 start -p missing-upgrade-032000 --memory=2200 --driver=docker 
E0307 11:28:36.171332    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 11:30:33.122083    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 11:31:35.784847    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 11:34:38.833133    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 11:35:33.128062    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 11:36:35.789248    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3989123546 start -p missing-upgrade-032000 --memory=2200 --driver=docker : exit status 52 (14m28.95106256s)

                                                
                                                
-- stdout --
	* [missing-upgrade-032000] minikube v1.26.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18239
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node missing-upgrade-032000 in cluster missing-upgrade-032000
	* Pulling base image ...
	* minikube 1.32.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.32.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	* Downloading Kubernetes v1.24.1 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "missing-upgrade-032000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > gcr.io/k8s-minikube/kicbase: downloading 386.00 MiB base image (repeated carriage-return progress updates, 0% through ~54%, flattened into the captured stderr; elided)
/ 386.00 MiB  55.29% 12.70 MiB p/    > gcr.io/k8s-minikube/kicbase: 218.28 MiB / 386.00 MiB  56.55% 12.70 MiB p/    > gcr.io/k8s-minikube/kicbase: 223.03 MiB / 386.00 MiB  57.78% 13.46 MiB p/    > gcr.io/k8s-minikube/kicbase: 227.81 MiB / 386.00 MiB  59.02% 13.46 MiB p/    > gcr.io/k8s-minikube/kicbase: 232.90 MiB / 386.00 MiB  60.34% 13.46 MiB p/    > gcr.io/k8s-minikube/kicbase: 235.41 MiB / 386.00 MiB  60.99% 13.92 MiB p/    > gcr.io/k8s-minikube/kicbase: 235.41 MiB / 386.00 MiB  60.99% 13.92 MiB p/    > gcr.io/k8s-minikube/kicbase: 235.41 MiB / 386.00 MiB  60.99% 13.92 MiB p/    > gcr.io/k8s-minikube/kicbase: 235.45 MiB / 386.00 MiB  61.00% 13.03 MiB p/    > gcr.io/k8s-minikube/kicbase: 235.59 MiB / 386.00 MiB  61.03% 13.03 MiB p/    > gcr.io/k8s-minikube/kicbase: 236.03 MiB / 386.00 MiB  61.15% 13.03 MiB p/    > gcr.io/k8s-minikube/kicbase: 237.84 MiB / 386.00 MiB  61.62% 12.45 MiB p/    > gcr.io/k8s-minikube/kicbase: 242.84 MiB / 386.00 MiB  62.91% 12.45 MiB p/    > gcr.io/k8s-minikube/kicbase: 248.20 M
iB / 386.00 MiB  64.30% 12.45 MiB p/    > gcr.io/k8s-minikube/kicbase: 253.56 MiB / 386.00 MiB  65.69% 13.33 MiB p/    > gcr.io/k8s-minikube/kicbase: 258.47 MiB / 386.00 MiB  66.96% 13.33 MiB p/    > gcr.io/k8s-minikube/kicbase: 263.78 MiB / 386.00 MiB  68.34% 13.33 MiB p/    > gcr.io/k8s-minikube/kicbase: 269.34 MiB / 386.00 MiB  69.78% 14.17 MiB p/    > gcr.io/k8s-minikube/kicbase: 274.16 MiB / 386.00 MiB  71.02% 14.17 MiB p/    > gcr.io/k8s-minikube/kicbase: 279.33 MiB / 386.00 MiB  72.36% 14.17 MiB p/    > gcr.io/k8s-minikube/kicbase: 281.29 MiB / 386.00 MiB  72.87% 14.54 MiB p/    > gcr.io/k8s-minikube/kicbase: 281.29 MiB / 386.00 MiB  72.87% 14.54 MiB p/    > gcr.io/k8s-minikube/kicbase: 281.29 MiB / 386.00 MiB  72.87% 14.54 MiB p/    > gcr.io/k8s-minikube/kicbase: 281.37 MiB / 386.00 MiB  72.89% 13.61 MiB p/    > gcr.io/k8s-minikube/kicbase: 281.57 MiB / 386.00 MiB  72.95% 13.61 MiB p/    > gcr.io/k8s-minikube/kicbase: 282.34 MiB / 386.00 MiB  73.14% 13.61 MiB p/    > gcr.io/k8s-minikube/kicbase: 285.6
2 MiB / 386.00 MiB  73.99% 13.19 MiB p/    > gcr.io/k8s-minikube/kicbase: 291.22 MiB / 386.00 MiB  75.45% 13.19 MiB p/    > gcr.io/k8s-minikube/kicbase: 296.07 MiB / 386.00 MiB  76.70% 13.19 MiB p/    > gcr.io/k8s-minikube/kicbase: 301.20 MiB / 386.00 MiB  78.03% 14.01 MiB p/    > gcr.io/k8s-minikube/kicbase: 305.29 MiB / 386.00 MiB  79.09% 14.01 MiB p/    > gcr.io/k8s-minikube/kicbase: 310.59 MiB / 386.00 MiB  80.46% 14.01 MiB p/    > gcr.io/k8s-minikube/kicbase: 316.24 MiB / 386.00 MiB  81.93% 14.73 MiB p/    > gcr.io/k8s-minikube/kicbase: 321.16 MiB / 386.00 MiB  83.20% 14.73 MiB p/    > gcr.io/k8s-minikube/kicbase: 325.34 MiB / 386.00 MiB  84.28% 14.73 MiB p/    > gcr.io/k8s-minikube/kicbase: 325.34 MiB / 386.00 MiB  84.28% 14.75 MiB p/    > gcr.io/k8s-minikube/kicbase: 325.34 MiB / 386.00 MiB  84.28% 14.75 MiB p/    > gcr.io/k8s-minikube/kicbase: 325.37 MiB / 386.00 MiB  84.29% 14.75 MiB p/    > gcr.io/k8s-minikube/kicbase: 325.42 MiB / 386.00 MiB  84.31% 13.81 MiB p/    > gcr.io/k8s-minikube/kicbase: 32
5.68 MiB / 386.00 MiB  84.37% 13.81 MiB p/    > gcr.io/k8s-minikube/kicbase: 326.65 MiB / 386.00 MiB  84.62% 13.81 MiB p/    > gcr.io/k8s-minikube/kicbase: 330.76 MiB / 386.00 MiB  85.69% 13.50 MiB p/    > gcr.io/k8s-minikube/kicbase: 335.83 MiB / 386.00 MiB  87.00% 13.50 MiB p/    > gcr.io/k8s-minikube/kicbase: 340.89 MiB / 386.00 MiB  88.31% 13.50 MiB p/    > gcr.io/k8s-minikube/kicbase: 346.32 MiB / 386.00 MiB  89.72% 14.30 MiB p/    > gcr.io/k8s-minikube/kicbase: 351.56 MiB / 386.00 MiB  91.08% 14.30 MiB p/    > gcr.io/k8s-minikube/kicbase: 356.53 MiB / 386.00 MiB  92.36% 14.30 MiB p/    > gcr.io/k8s-minikube/kicbase: 358.56 MiB / 386.00 MiB  92.89% 14.69 MiB p/    > gcr.io/k8s-minikube/kicbase: 358.56 MiB / 386.00 MiB  92.89% 14.69 MiB p/    > gcr.io/k8s-minikube/kicbase: 358.56 MiB / 386.00 MiB  92.89% 14.69 MiB p/    > gcr.io/k8s-minikube/kicbase: 358.64 MiB / 386.00 MiB  92.91% 13.75 MiB p/    > gcr.io/k8s-minikube/kicbase: 358.84 MiB / 386.00 MiB  92.96% 13.75 MiB p/    > gcr.io/k8s-minikube/kicbase:
359.63 MiB / 386.00 MiB  93.17% 13.75 MiB p/    > gcr.io/k8s-minikube/kicbase: 363.01 MiB / 386.00 MiB  94.05% 13.34 MiB p/    > gcr.io/k8s-minikube/kicbase: 367.91 MiB / 386.00 MiB  95.31% 13.34 MiB p/    > gcr.io/k8s-minikube/kicbase: 371.97 MiB / 386.00 MiB  96.37% 13.34 MiB p/    > gcr.io/k8s-minikube/kicbase: 376.76 MiB / 386.00 MiB  97.61% 13.95 MiB p/    > gcr.io/k8s-minikube/kicbase: 381.85 MiB / 386.00 MiB  98.92% 13.95 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.96 MiB / 386.00 MiB  99.99% 13.95 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.96 MiB / 386.00 MiB  99.99% 14.04 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.96 MiB / 386.00 MiB  99.99% 14.04 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.96 MiB / 386.00 MiB  99.99% 14.04 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.96 MiB / 386.00 MiB  99.99% 13.14 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.96 MiB / 386.00 MiB  99.99% 13.14 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.96 MiB / 386.00 MiB  99.99% 13.14 MiB p/    > gcr.io/k8s-minikube/kicba
se: 385.96 MiB / 386.00 MiB  99.99% 12.29 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.96 MiB / 386.00 MiB  99.99% 12.29 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.96 MiB / 386.00 MiB  99.99% 12.29 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.96 MiB / 386.00 MiB  99.99% 11.50 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.96 MiB / 386.00 MiB  99.99% 11.50 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.96 MiB / 386.00 MiB  99.99% 11.50 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.96 MiB / 386.00 MiB  99.99% 10.75 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.96 MiB / 386.00 MiB  99.99% 10.75 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.96 MiB / 386.00 MiB  99.99% 10.75 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.96 MiB / 386.00 MiB  99.99% 10.06 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.97 MiB / 386.00 MiB  99.99% 10.06 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.97 MiB / 386.00 MiB  99.99% 10.06 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.97 MiB / 386.00 MiB  99.99% 9.41 MiB p/s    > gcr.io/k8s-minikube/ki
cbase: 385.97 MiB / 386.00 MiB  99.99% 9.41 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.97 MiB / 386.00 MiB  99.99% 9.41 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.97 MiB / 386.00 MiB  99.99% 8.80 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.97 MiB / 386.00 MiB  99.99% 8.80 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.97 MiB / 386.00 MiB  99.99% 8.80 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.97 MiB / 386.00 MiB  99.99% 8.24 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.97 MiB / 386.00 MiB  99.99% 8.24 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.97 MiB / 386.00 MiB  99.99% 8.24 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.97 MiB / 386.00 MiB  99.99% 7.71 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.97 MiB / 386.00 MiB  99.99% 7.71 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.97 MiB / 386.00 MiB  99.99% 7.71 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.97 MiB / 386.00 MiB  99.99% 7.21 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.97 MiB / 386.00 MiB  99.99% 7.21 MiB p/s    > gcr.io/k8s-minikube
/kicbase: 385.97 MiB / 386.00 MiB  99.99% 7.21 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.97 MiB / 386.00 MiB  99.99% 6.74 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.97 MiB / 386.00 MiB  99.99% 6.74 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.97 MiB / 386.00 MiB  99.99% 6.74 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.97 MiB / 386.00 MiB  99.99% 6.31 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.97 MiB / 386.00 MiB  99.99% 6.31 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.97 MiB / 386.00 MiB  99.99% 6.31 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.98 MiB / 386.00 MiB  99.99% 5.90 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.98 MiB / 386.00 MiB  99.99% 5.90 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.98 MiB / 386.00 MiB  99.99% 5.90 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.98 MiB / 386.00 MiB  99.99% 5.52 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.98 MiB / 386.00 MiB  99.99% 5.52 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.98 MiB / 386.00 MiB  99.99% 5.52 MiB p/s    > gcr.io/k8s-minik
ube/kicbase: 385.98 MiB / 386.00 MiB  99.99% 5.17 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.98 MiB / 386.00 MiB  99.99% 5.17 MiB p/s    > gcr.io/k8s-minikube/kicbase: 385.99 MiB / 386.00 MiB  100.00% 5.17 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.99 MiB / 386.00 MiB  100.00% 4.83 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.99 MiB / 386.00 MiB  100.00% 4.83 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.99 MiB / 386.00 MiB  100.00% 4.83 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.99 MiB / 386.00 MiB  100.00% 4.52 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.99 MiB / 386.00 MiB  100.00% 4.52 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.99 MiB / 386.00 MiB  100.00% 4.52 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.99 MiB / 386.00 MiB  100.00% 4.23 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.99 MiB / 386.00 MiB  100.00% 4.23 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.99 MiB / 386.00 MiB  100.00% 4.23 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.99 MiB / 386.00 MiB  100.00% 3.96 MiB p/    > gcr.io/k8s-mi
nikube/kicbase: 385.99 MiB / 386.00 MiB  100.00% 3.96 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.99 MiB / 386.00 MiB  100.00% 3.96 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.99 MiB / 386.00 MiB  100.00% 3.70 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.99 MiB / 386.00 MiB  100.00% 3.70 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.99 MiB / 386.00 MiB  100.00% 3.70 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.99 MiB / 386.00 MiB  100.00% 3.46 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.99 MiB / 386.00 MiB  100.00% 3.46 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.99 MiB / 386.00 MiB  100.00% 3.46 MiB p/    > gcr.io/k8s-minikube/kicbase: 385.99 MiB / 386.00 MiB  100.00% 3.24 MiB p/    > gcr.io/k8s-minikube/kicbase: 386.00 MiB / 386.00 MiB  100.00% 8.46 MiB p/    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s
-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/
k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.
io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > g
cr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?
> gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?
> gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s
?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ?
p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?%
? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________]
?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [_________________________
__] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [______________________
_____] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________
________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [________________
___________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [_____________
______________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [__________
_________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [_______
____________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [____
_______________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase: 0 B [_________________________] ?% ? p/s 46s! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p missing-upgrade-032000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Related issue: https://github.com/kubernetes/minikube/issues/7072

                                                
                                                
** /stderr **
version_upgrade_test.go:309: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3989123546 start -p missing-upgrade-032000 --memory=2200 --driver=docker 
E0307 11:45:16.193461    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 11:45:33.139975    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 11:46:35.853749    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 11:50:33.198733    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 11:51:18.906753    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 11:51:35.858370    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3989123546 start -p missing-upgrade-032000 --memory=2200 --driver=docker : exit status 52 (12m53.256399124s)

                                                
                                                
-- stdout --
	* [missing-upgrade-032000] minikube v1.26.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18239
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-032000 in cluster missing-upgrade-032000
	* Pulling base image ...
	* docker "missing-upgrade-032000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "missing-upgrade-032000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p missing-upgrade-032000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Related issue: https://github.com/kubernetes/minikube/issues/7072

                                                
                                                
** /stderr **
version_upgrade_test.go:309: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3989123546 start -p missing-upgrade-032000 --memory=2200 --driver=docker 
panic: test timed out after 2h0m0s
running tests:
	TestMissingContainerUpgrade (27m50s)
	TestNetworkPlugins (27m57s)
	TestStoppedBinaryUpgrade (2m33s)
	TestStoppedBinaryUpgrade/Upgrade (2m28s)

                                                
                                                
goroutine 2493 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 15 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0002356c0, 0xc0012dfbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000714330, {0xaaa0e20, 0x2a, 0x2a}, {0x6759ba5?, 0x81ec43e?, 0xaac3140?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc000a68640)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc000a68640)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

                                                
                                                
goroutine 11 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0006cce00)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 608 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000724820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002008820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002008820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestDockerFlags(0xc002008820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:43 +0x105
testing.tRunner(0xc002008820, 0x978e0a8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 23 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 22
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

                                                
                                                
goroutine 2490 [IO wait]:
internal/poll.runtime_pollWait(0x52379220, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0022a8240?, 0xc00071b285?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0022a8240, {0xc00071b285, 0x57b, 0x57b})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002344058, {0xc00071b285?, 0xc000625a40?, 0x206?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0021e8090, {0x9798c48, 0xc0020ce030})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x9798d88, 0xc0021e8090}, {0x9798c48, 0xc0020ce030}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00243d678?, {0x9798d88, 0xc0021e8090})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00243d738?, {0x9798d88?, 0xc0021e8090?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x9798d88, 0xc0021e8090}, {0x9798d08, 0xc002344058}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0029126c0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2339
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 2353 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000724820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020096c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020096c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0020096c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0020096c0, 0xc0028fa200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2334
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1320 [chan send, 106 minutes]:
os/exec.(*Cmd).watchCtx(0xc00221a000, 0xc000067680)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1319
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2335 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000724820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002009040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002009040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002009040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002009040, 0xc0028fa000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2334
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2356 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000724820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002009ba0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002009ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002009ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002009ba0, 0xc0028fa400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2334
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 606 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000724820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002008000)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002008000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertOptions(0xc002008000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:36 +0x92
testing.tRunner(0xc002008000, 0x978e098)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2355 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000724820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002009a00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002009a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002009a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002009a00, 0xc0028fa300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2334
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 685 [IO wait, 112 minutes]:
internal/poll.runtime_pollWait(0x523798e8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0028fa380?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0028fa380)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc0028fa380)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0009f82c0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0009f82c0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0008fe0f0, {0x97b04e0, 0xc0009f82c0})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0008fe0f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc00211b040?, 0xc00211b040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 682
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

                                                
                                                
goroutine 2486 [syscall, 2 minutes]:
syscall.syscall6(0xc0021e9f80?, 0x1000000000010?, 0x1000000004c?, 0x52315558?, 0x90?, 0xb3a6108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc00242d740?, 0x669a165?, 0x90?, 0x96fe0a0?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x67cae85?, 0xc00242d774, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc00239e5a0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0020582c0)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0020582c0)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000a604e0, 0xc0020582c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2.1()
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:183 +0x37e
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer.Operation.withEmptyData.func1()
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:18 +0x13
github.com/cenkalti/backoff/v4.doRetryNotify[...](0xc00242dc20?, {0x97a6330, 0xc0024ac580}, 0x978f130, {0x0, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:88 +0x132
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer(0x818915f?, {0x97a6330?, 0xc0024ac580?}, 0x40?, {0x0?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:61 +0x5c
github.com/cenkalti/backoff/v4.RetryNotify(...)
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:49
k8s.io/minikube/pkg/util/retry.Expo(0xc001ffbe28, 0x3b9aca00, 0x1a3185c5000, {0xc001ffbd08?, 0x92aa1e0?, 0x66e63c8?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/util/retry/retry.go:60 +0xeb
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2(0xc000a604e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:188 +0x2de
testing.tRunner(0xc000a604e0, 0xc002b70340)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2337
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2336 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000724820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020091e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020091e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0020091e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0020091e0, 0xc0028fa080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2334
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2492 [select]:
os/exec.(*Cmd).watchCtx(0xc002058000, 0xc000874000)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2339
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2263 [chan receive, 28 minutes]:
testing.(*T).Run(0xc002b4a340, {0x81940a7?, 0x703ec3925d6?}, 0xc0021d6198)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc002b4a340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc002b4a340, 0x978e178)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 613 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000724820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002009380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002009380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestHyperKitDriverInstallOrUpdate(0xc002009380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/driver_install_or_update_test.go:108 +0x39
testing.tRunner(0xc002009380, 0x978e0f0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 205 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002276f00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 193
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 206 [chan receive, 116 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a74a00, 0xc0006b4900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 193
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2325 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000724820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002b4a1a0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002b4a1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc002b4a1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc002b4a1a0, 0x978e1c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 917 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 916
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 209 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc000a749d0, 0x2c)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x92aa1e0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002276c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a74a00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00241e620, {0x979a240, 0xc0022f01b0}, 0x1, 0xc0006b4900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00241e620, 0x3b9aca00, 0x0, 0x1, 0xc0006b4900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 206
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 210 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x97bcc40, 0xc0006b4900}, 0xc000507f50, 0xc002206f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x97bcc40, 0xc0006b4900}, 0xc0?, 0xc000507f50, 0xc000507f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x97bcc40?, 0xc0006b4900?}, 0x94b9160?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000507fd0?, 0x6813e44?, 0xc000066cc0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 206
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 211 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 210
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2487 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x52379508, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0022a8c00?, 0xc0022a2285?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0022a8c00, {0xc0022a2285, 0x57b, 0x57b})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002344130, {0xc0022a2285?, 0xc002a04210?, 0x234?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0021e8210, {0x9798c48, 0xc0020ce318})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x9798d88, 0xc0021e8210}, {0x9798c48, 0xc0020ce318}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xa9d66c0?, {0x9798d88, 0xc0021e8210})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xf?, {0x9798d88?, 0xc0021e8210?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x9798d88, 0xc0021e8210}, {0x9798d08, 0xc002344130}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002b70340?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2486
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 614 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000724820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002009520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002009520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestHyperkitDriverSkipUpgrade(0xc002009520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/driver_install_or_update_test.go:172 +0x2a
testing.tRunner(0xc002009520, 0x978e0f8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2339 [syscall]:
syscall.syscall6(0xc0021e9f80?, 0x1000000000010?, 0x1000000004c?, 0x52315558?, 0x90?, 0xb3a6108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc00242b758?, 0x669a165?, 0x90?, 0x96fe0a0?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x67cae85?, 0xc00242b78c, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc002cd0030)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc002058000)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc002058000)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002b4b380, 0xc002058000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestMissingContainerUpgrade.func1()
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:309 +0x66
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer.Operation.withEmptyData.func1()
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:18 +0x13
github.com/cenkalti/backoff/v4.doRetryNotify[...](0xc00242bba0?, {0x97a6330, 0xc0009f9860}, 0x978f130, {0x0, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:88 +0x132
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer(0x912b748?, {0x97a6330?, 0xc0009f9860?}, 0x40?, {0x0?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:61 +0x5c
github.com/cenkalti/backoff/v4.RetryNotify(...)
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:49
k8s.io/minikube/pkg/util/retry.Expo(0xc00242bd10, 0x3b9aca00, 0x1a3185c5000, {0xc00242bc70?, 0x92aa1e0?, 0xa5c?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/util/retry/retry.go:60 +0xeb
k8s.io/minikube/test/integration.TestMissingContainerUpgrade(0xc002b4b380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:314 +0x54e
testing.tRunner(0xc002b4b380, 0x978e158)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2358 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000724820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000235040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000235040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000235040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000235040, 0xc0028fa500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2334
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 610 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000724820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002008ea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002008ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc002008ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:146 +0x92
testing.tRunner(0xc002008ea0, 0x978e0d0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 609 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000724820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002008d00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002008d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc002008d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:83 +0x92
testing.tRunner(0xc002008d00, 0x978e0d8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1460 [select, 106 minutes]:
net/http.(*persistConn).readLoop(0xc0024d10e0)
	/usr/local/go/src/net/http/transport.go:2260 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1475
	/usr/local/go/src/net/http/transport.go:1798 +0x152f

                                                
                                                
goroutine 607 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000724820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002008680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002008680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertExpiration(0xc002008680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc002008680, 0x978e090)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2488 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x52379318, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0022a8cc0?, 0xc0009ab600?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0022a8cc0, {0xc0009ab600, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002344148, {0xc0009ab600?, 0xc00245bdf0?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0021e8240, {0x9798c48, 0xc0020ce328})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x9798d88, 0xc0021e8240}, {0x9798c48, 0xc0020ce328}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x97bca10?, {0x9798d88, 0xc0021e8240})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x66fd55167ff?, {0x9798d88?, 0xc0021e8240?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x9798d88, 0xc0021e8240}, {0x9798d08, 0xc002344148}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00223e001?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2486
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

goroutine 907 [chan receive, 110 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0007488c0, 0xc0006b4900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 807
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

goroutine 906 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0022a8600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 807
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2265 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000724820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002b4a680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002b4a680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc002b4a680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc002b4a680, 0x978e190)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2337 [chan receive, 2 minutes]:
testing.(*T).Run(0xc002b4aea0, {0x8198220?, 0x3005753e800?}, 0xc002b70340)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc002b4aea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:160 +0x2b4
testing.tRunner(0xc002b4aea0, 0x978e1c8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2320 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000724820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002b4a000)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002b4a000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc002b4a000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:85 +0x89
testing.tRunner(0xc002b4a000, 0x978e1a0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1461 [select, 106 minutes]:
net/http.(*persistConn).writeLoop(0xc0024d10e0)
	/usr/local/go/src/net/http/transport.go:2443 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1475
	/usr/local/go/src/net/http/transport.go:1799 +0x1585

goroutine 2334 [chan receive, 28 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0020081a0, 0xc0021d6198)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2263
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 915 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc000a75f50, 0x2b)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x92aa1e0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0022a84e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0007488c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a6c270, {0x979a240, 0xc000a501b0}, 0x1, 0xc0006b4900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000a6c270, 0x3b9aca00, 0x0, 0x1, 0xc0006b4900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 907
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef

goroutine 916 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x97bcc40, 0xc0006b4900}, 0xc00243e750, 0xc002205f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x97bcc40, 0xc0006b4900}, 0xc0?, 0xc00243e750, 0xc00243e798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x97bcc40?, 0xc0006b4900?}, 0xc00243e7b0?, 0x6c1e298?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00243e7d0?, 0x6813e44?, 0xc0006b40c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 907
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

goroutine 2357 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000724820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000234680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000234680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000234680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000234680, 0xc0028fa480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2334
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2354 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000724820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002009860)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002009860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002009860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002009860, 0xc0028fa280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2334
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2359 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000724820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000235380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000235380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000235380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000235380, 0xc0028fa580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2334
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1201 [chan send, 108 minutes]:
os/exec.(*Cmd).watchCtx(0xc0029ff080, 0xc002a9c240)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1184
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 2264 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000724820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002b4a4e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002b4a4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc002b4a4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc002b4a4e0, 0x978e180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1938 [syscall, 96 minutes]:
syscall.syscall(0x0?, 0xc0024a11d0?, 0x6742025?, 0xc0024396b0?)
	/usr/local/go/src/runtime/sys_darwin.go:23 +0x70
syscall.Flock(0xc0024396f0?, 0xc0022d1180?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:682 +0x29
github.com/juju/mutex/v2.acquireFlock.func3()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:114 +0x34
github.com/juju/mutex/v2.acquireFlock.func4()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:121 +0x58
github.com/juju/mutex/v2.acquireFlock.func5()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:151 +0x22
created by github.com/juju/mutex/v2.acquireFlock in goroutine 1926
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:150 +0x4b1

goroutine 1379 [chan send, 106 minutes]:
os/exec.(*Cmd).watchCtx(0xc00205bce0, 0xc0006b4b40)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1378
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 1422 [chan send, 106 minutes]:
os/exec.(*Cmd).watchCtx(0xc0020b91e0, 0xc002903c20)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 794
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 2489 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc0020582c0, 0xc0029131a0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2486
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 2491 [IO wait]:
internal/poll.runtime_pollWait(0x52379ad8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0022a8300?, 0xc0009aba00?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0022a8300, {0xc0009aba00, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002344080, {0xc0009aba00?, 0x51e37aa8?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0021e80c0, {0x9798c48, 0xc0020ce050})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x9798d88, 0xc0021e80c0}, {0x9798c48, 0xc0020ce050}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xa9d66c0?, {0x9798d88, 0xc0021e80c0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000235d40?, {0x9798d88?, 0xc0021e80c0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x9798d88, 0xc0021e80c0}, {0x9798d08, 0xc002344080}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0028fa680?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2339
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab


Test pass (172/211)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 29.04
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.36
9 TestDownloadOnly/v1.20.0/DeleteAll 0.63
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.37
12 TestDownloadOnly/v1.28.4/json-events 22.6
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.32
18 TestDownloadOnly/v1.28.4/DeleteAll 0.64
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.37
21 TestDownloadOnly/v1.29.0-rc.2/json-events 22.82
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.35
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.63
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.37
29 TestDownloadOnlyKic 1.96
30 TestBinaryMirror 1.64
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.2
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.22
36 TestAddons/Setup 223.17
40 TestAddons/parallel/InspektorGadget 10.88
41 TestAddons/parallel/MetricsServer 5.81
42 TestAddons/parallel/HelmTiller 10.5
44 TestAddons/parallel/CSI 60.95
45 TestAddons/parallel/Headlamp 13.5
46 TestAddons/parallel/CloudSpanner 5.66
47 TestAddons/parallel/LocalPath 53.89
48 TestAddons/parallel/NvidiaDevicePlugin 5.63
49 TestAddons/parallel/Yakd 5
52 TestAddons/serial/GCPAuth/Namespaces 0.1
53 TestAddons/StoppedEnableDisable 11.65
64 TestErrorSpam/setup 21.7
65 TestErrorSpam/start 2.02
66 TestErrorSpam/status 1.23
67 TestErrorSpam/pause 1.73
68 TestErrorSpam/unpause 1.87
69 TestErrorSpam/stop 2.82
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 74.26
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 40.06
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 10.32
81 TestFunctional/serial/CacheCmd/cache/add_local 1.57
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
83 TestFunctional/serial/CacheCmd/cache/list 0.09
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.41
85 TestFunctional/serial/CacheCmd/cache/cache_reload 3.47
86 TestFunctional/serial/CacheCmd/cache/delete 0.17
87 TestFunctional/serial/MinikubeKubectlCmd 0.52
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.68
89 TestFunctional/serial/ExtraConfig 39.81
90 TestFunctional/serial/ComponentHealth 0.06
91 TestFunctional/serial/LogsCmd 3.07
92 TestFunctional/serial/LogsFileCmd 3.26
93 TestFunctional/serial/InvalidService 3.93
95 TestFunctional/parallel/ConfigCmd 0.57
96 TestFunctional/parallel/DashboardCmd 15.44
97 TestFunctional/parallel/DryRun 1.56
98 TestFunctional/parallel/InternationalLanguage 0.76
99 TestFunctional/parallel/StatusCmd 1.24
104 TestFunctional/parallel/AddonsCmd 0.27
105 TestFunctional/parallel/PersistentVolumeClaim 31.51
107 TestFunctional/parallel/SSHCmd 0.74
108 TestFunctional/parallel/CpCmd 2.43
109 TestFunctional/parallel/MySQL 111.09
110 TestFunctional/parallel/FileSync 0.39
111 TestFunctional/parallel/CertSync 2.39
115 TestFunctional/parallel/NodeLabels 0.08
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.39
119 TestFunctional/parallel/License 1.55
120 TestFunctional/parallel/Version/short 0.18
121 TestFunctional/parallel/Version/components 0.81
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
126 TestFunctional/parallel/ImageCommands/ImageBuild 5.59
127 TestFunctional/parallel/ImageCommands/Setup 5.61
128 TestFunctional/parallel/DockerEnv/bash 1.73
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.31
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.29
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.31
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.35
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.6
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.8
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.09
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.61
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.97
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.21
139 TestFunctional/parallel/ServiceCmd/DeployApp 62.12
140 TestFunctional/parallel/ServiceCmd/List 0.44
141 TestFunctional/parallel/ServiceCmd/JSONOutput 0.43
142 TestFunctional/parallel/ServiceCmd/HTTPS 15
143 TestFunctional/parallel/ServiceCmd/Format 15
145 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.55
146 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
148 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.14
149 TestFunctional/parallel/ServiceCmd/URL 15
150 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
151 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
155 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
156 TestFunctional/parallel/ProfileCmd/profile_not_create 0.57
157 TestFunctional/parallel/ProfileCmd/profile_list 0.54
158 TestFunctional/parallel/ProfileCmd/profile_json_output 0.6
159 TestFunctional/parallel/MountCmd/any-port 11.67
160 TestFunctional/parallel/MountCmd/specific-port 2.13
161 TestFunctional/parallel/MountCmd/VerifyCleanup 3
162 TestFunctional/delete_addon-resizer_images 0.13
163 TestFunctional/delete_my-image_image 0.05
164 TestFunctional/delete_minikube_cached_images 0.05
168 TestMutliControlPlane/serial/StartCluster 110.6
169 TestMutliControlPlane/serial/DeployApp 9.61
170 TestMutliControlPlane/serial/PingHostFromPods 1.39
171 TestMutliControlPlane/serial/AddWorkerNode 20.22
172 TestMutliControlPlane/serial/NodeLabels 0.06
173 TestMutliControlPlane/serial/HAppyAfterClusterStart 1.11
174 TestMutliControlPlane/serial/CopyFile 24.52
175 TestMutliControlPlane/serial/StopSecondaryNode 11.85
176 TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.85
177 TestMutliControlPlane/serial/RestartSecondaryNode 33.61
178 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.2
179 TestMutliControlPlane/serial/RestartClusterKeepsNodes 168.85
180 TestMutliControlPlane/serial/DeleteSecondaryNode 12.04
181 TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
182 TestMutliControlPlane/serial/StopCluster 32.89
183 TestMutliControlPlane/serial/RestartCluster 58.94
184 TestMutliControlPlane/serial/DegradedAfterClusterRestart 0.99
185 TestMutliControlPlane/serial/AddSecondaryNode 47.09
186 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.12
189 TestImageBuild/serial/Setup 21.65
190 TestImageBuild/serial/NormalBuild 4.76
191 TestImageBuild/serial/BuildWithBuildArg 1.18
192 TestImageBuild/serial/BuildWithDockerIgnore 1.1
193 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.04
197 TestJSONOutput/start/Command 36.87
198 TestJSONOutput/start/Audit 0
200 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/pause/Command 0.6
204 TestJSONOutput/pause/Audit 0
206 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
209 TestJSONOutput/unpause/Command 0.61
210 TestJSONOutput/unpause/Audit 0
212 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
213 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
215 TestJSONOutput/stop/Command 10.84
216 TestJSONOutput/stop/Audit 0
218 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
219 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
220 TestErrorJSONOutput 0.84
222 TestKicCustomNetwork/create_custom_network 23.99
223 TestKicCustomNetwork/use_default_bridge_network 23.44
224 TestKicExistingNetwork 23.73
225 TestKicCustomSubnet 24.17
226 TestKicStaticIP 23.76
227 TestMainNoArgs 0.09
228 TestMinikubeProfile 51.09
231 TestMountStart/serial/StartWithMountFirst 7.69
232 TestMountStart/serial/VerifyMountFirst 0.38
233 TestMountStart/serial/StartWithMountSecond 7.69
234 TestMountStart/serial/VerifyMountSecond 0.38
235 TestMountStart/serial/DeleteFirst 2.06
236 TestMountStart/serial/VerifyMountPostDelete 0.42
237 TestMountStart/serial/Stop 1.55
238 TestMountStart/serial/RestartStopped 8.86
258 TestPreload 204.67
TestDownloadOnly/v1.20.0/json-events (29.04s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-515000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-515000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker : (29.042609592s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (29.04s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.36s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-515000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-515000: exit status 85 (357.552303ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-515000 | jenkins | v1.32.0 | 07 Mar 24 09:55 PST |          |
	|         | -p download-only-515000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 09:55:26
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 09:55:26.340919    9213 out.go:291] Setting OutFile to fd 1 ...
	I0307 09:55:26.341201    9213 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:55:26.341207    9213 out.go:304] Setting ErrFile to fd 2...
	I0307 09:55:26.341211    9213 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:55:26.341406    9213 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	W0307 09:55:26.341504    9213 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18239-8734/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18239-8734/.minikube/config/config.json: no such file or directory
	I0307 09:55:26.343233    9213 out.go:298] Setting JSON to true
	I0307 09:55:26.365045    9213 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3297,"bootTime":1709830829,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0307 09:55:26.365135    9213 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 09:55:26.387744    9213 out.go:97] [download-only-515000] minikube v1.32.0 on Darwin 14.3.1
	I0307 09:55:26.409217    9213 out.go:169] MINIKUBE_LOCATION=18239
	I0307 09:55:26.387966    9213 notify.go:220] Checking for updates...
	W0307 09:55:26.388033    9213 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/preloaded-tarball: no such file or directory
	I0307 09:55:26.452106    9213 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig
	I0307 09:55:26.473296    9213 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0307 09:55:26.494203    9213 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 09:55:26.515320    9213 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube
	W0307 09:55:26.557151    9213 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 09:55:26.557573    9213 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 09:55:26.613546    9213 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0307 09:55:26.613692    9213 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 09:55:26.712508    9213 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:109 SystemTime:2024-03-07 17:55:26.702826675 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213279744 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0307 09:55:26.733798    9213 out.go:97] Using the docker driver based on user configuration
	I0307 09:55:26.733863    9213 start.go:297] selected driver: docker
	I0307 09:55:26.733877    9213 start.go:901] validating driver "docker" against <nil>
	I0307 09:55:26.734098    9213 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 09:55:26.832818    9213 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:109 SystemTime:2024-03-07 17:55:26.823840008 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213279744 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0307 09:55:26.833006    9213 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 09:55:26.836023    9213 start_flags.go:393] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0307 09:55:26.836172    9213 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 09:55:26.857111    9213 out.go:169] Using Docker Desktop driver with root privileges
	I0307 09:55:26.878204    9213 cni.go:84] Creating CNI manager for ""
	I0307 09:55:26.878250    9213 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0307 09:55:26.878372    9213 start.go:340] cluster config:
	{Name:download-only-515000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:5877 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-515000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 09:55:26.900035    9213 out.go:97] Starting "download-only-515000" primary control-plane node in "download-only-515000" cluster
	I0307 09:55:26.900077    9213 cache.go:121] Beginning downloading kic base image for docker with docker
	I0307 09:55:26.920842    9213 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0307 09:55:26.920920    9213 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 09:55:26.920971    9213 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 09:55:26.970741    9213 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0307 09:55:26.971016    9213 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0307 09:55:26.971172    9213 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0307 09:55:27.228622    9213 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0307 09:55:27.228650    9213 cache.go:56] Caching tarball of preloaded images
	I0307 09:55:27.228986    9213 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 09:55:27.250783    9213 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0307 09:55:27.250811    9213 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0307 09:55:27.846621    9213 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0307 09:55:46.884575    9213 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0307 09:55:46.884754    9213 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0307 09:55:47.434036    9213 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0307 09:55:47.434256    9213 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/download-only-515000/config.json ...
	I0307 09:55:47.434279    9213 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/download-only-515000/config.json: {Name:mkb0e8adaf0d48686481c9c1f3ba731541ec2dcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 09:55:47.434574    9213 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 09:55:47.434862    9213 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	I0307 09:55:51.153674    9213 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	
	
	* The control-plane node download-only-515000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-515000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.36s)

TestDownloadOnly/v1.20.0/DeleteAll (0.63s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.63s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.37s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-515000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.37s)

TestDownloadOnly/v1.28.4/json-events (22.6s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-440000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-440000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker : (22.602161872s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (22.60s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.32s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-440000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-440000: exit status 85 (322.26636ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-515000 | jenkins | v1.32.0 | 07 Mar 24 09:55 PST |                     |
	|         | -p download-only-515000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 07 Mar 24 09:55 PST | 07 Mar 24 09:55 PST |
	| delete  | -p download-only-515000        | download-only-515000 | jenkins | v1.32.0 | 07 Mar 24 09:55 PST | 07 Mar 24 09:55 PST |
	| start   | -o=json --download-only        | download-only-440000 | jenkins | v1.32.0 | 07 Mar 24 09:55 PST |                     |
	|         | -p download-only-440000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 09:55:56
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 09:55:56.745599    9295 out.go:291] Setting OutFile to fd 1 ...
	I0307 09:55:56.745878    9295 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:55:56.745883    9295 out.go:304] Setting ErrFile to fd 2...
	I0307 09:55:56.745887    9295 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:55:56.746069    9295 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 09:55:56.748284    9295 out.go:298] Setting JSON to true
	I0307 09:55:56.770395    9295 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3327,"bootTime":1709830829,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0307 09:55:56.770517    9295 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 09:55:56.792376    9295 out.go:97] [download-only-440000] minikube v1.32.0 on Darwin 14.3.1
	I0307 09:55:56.813000    9295 out.go:169] MINIKUBE_LOCATION=18239
	I0307 09:55:56.792495    9295 notify.go:220] Checking for updates...
	I0307 09:55:56.856096    9295 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig
	I0307 09:55:56.877113    9295 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0307 09:55:56.898422    9295 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 09:55:56.919582    9295 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube
	W0307 09:55:56.962272    9295 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 09:55:56.962766    9295 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 09:55:57.017364    9295 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0307 09:55:57.017499    9295 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 09:55:57.115180    9295 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:109 SystemTime:2024-03-07 17:55:57.105584778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213279744 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0307 09:55:57.138742    9295 out.go:97] Using the docker driver based on user configuration
	I0307 09:55:57.138772    9295 start.go:297] selected driver: docker
	I0307 09:55:57.138782    9295 start.go:901] validating driver "docker" against <nil>
	I0307 09:55:57.138942    9295 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 09:55:57.235113    9295 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:109 SystemTime:2024-03-07 17:55:57.226263398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213279744 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0307 09:55:57.235285    9295 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 09:55:57.238184    9295 start_flags.go:393] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0307 09:55:57.238342    9295 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 09:55:57.259470    9295 out.go:169] Using Docker Desktop driver with root privileges
	I0307 09:55:57.280553    9295 cni.go:84] Creating CNI manager for ""
	I0307 09:55:57.280600    9295 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 09:55:57.280623    9295 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 09:55:57.280746    9295 start.go:340] cluster config:
	{Name:download-only-440000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:5877 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-440000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 09:55:57.302383    9295 out.go:97] Starting "download-only-440000" primary control-plane node in "download-only-440000" cluster
	I0307 09:55:57.302406    9295 cache.go:121] Beginning downloading kic base image for docker with docker
	I0307 09:55:57.323424    9295 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0307 09:55:57.323533    9295 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 09:55:57.323635    9295 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 09:55:57.373819    9295 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0307 09:55:57.373988    9295 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0307 09:55:57.374007    9295 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0307 09:55:57.374013    9295 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0307 09:55:57.374022    9295 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0307 09:55:57.608312    9295 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0307 09:55:57.608343    9295 cache.go:56] Caching tarball of preloaded images
	I0307 09:55:57.608565    9295 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 09:55:57.630348    9295 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0307 09:55:57.630405    9295 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0307 09:55:58.208975    9295 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-440000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-440000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.32s)

TestDownloadOnly/v1.28.4/DeleteAll (0.64s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.64s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.37s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-440000
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.37s)

TestDownloadOnly/v1.29.0-rc.2/json-events (22.82s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-092000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-092000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker : (22.81970631s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (22.82s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                

                                                
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                

                                                
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-092000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-092000: exit status 85 (345.322743ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-515000 | jenkins | v1.32.0 | 07 Mar 24 09:55 PST |                     |
	|         | -p download-only-515000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 07 Mar 24 09:55 PST | 07 Mar 24 09:55 PST |
	| delete  | -p download-only-515000           | download-only-515000 | jenkins | v1.32.0 | 07 Mar 24 09:55 PST | 07 Mar 24 09:55 PST |
	| start   | -o=json --download-only           | download-only-440000 | jenkins | v1.32.0 | 07 Mar 24 09:55 PST |                     |
	|         | -p download-only-440000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 07 Mar 24 09:56 PST | 07 Mar 24 09:56 PST |
	| delete  | -p download-only-440000           | download-only-440000 | jenkins | v1.32.0 | 07 Mar 24 09:56 PST | 07 Mar 24 09:56 PST |
	| start   | -o=json --download-only           | download-only-092000 | jenkins | v1.32.0 | 07 Mar 24 09:56 PST |                     |
	|         | -p download-only-092000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 09:56:20
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 09:56:20.682304    9377 out.go:291] Setting OutFile to fd 1 ...
	I0307 09:56:20.682568    9377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:56:20.682574    9377 out.go:304] Setting ErrFile to fd 2...
	I0307 09:56:20.682577    9377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:56:20.682770    9377 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 09:56:20.684181    9377 out.go:298] Setting JSON to true
	I0307 09:56:20.706075    9377 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3351,"bootTime":1709830829,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0307 09:56:20.706166    9377 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 09:56:20.727600    9377 out.go:97] [download-only-092000] minikube v1.32.0 on Darwin 14.3.1
	I0307 09:56:20.749186    9377 out.go:169] MINIKUBE_LOCATION=18239
	I0307 09:56:20.727836    9377 notify.go:220] Checking for updates...
	I0307 09:56:20.770462    9377 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig
	I0307 09:56:20.792517    9377 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0307 09:56:20.814394    9377 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 09:56:20.835678    9377 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube
	W0307 09:56:20.878240    9377 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 09:56:20.878733    9377 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 09:56:20.933654    9377 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0307 09:56:20.933786    9377 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 09:56:21.035190    9377 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:109 SystemTime:2024-03-07 17:56:21.025487041 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213279744 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0307 09:56:21.056448    9377 out.go:97] Using the docker driver based on user configuration
	I0307 09:56:21.056492    9377 start.go:297] selected driver: docker
	I0307 09:56:21.056507    9377 start.go:901] validating driver "docker" against <nil>
	I0307 09:56:21.056752    9377 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 09:56:21.155557    9377 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:109 SystemTime:2024-03-07 17:56:21.145295089 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213279744 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0307 09:56:21.155744    9377 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 09:56:21.158657    9377 start_flags.go:393] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0307 09:56:21.158796    9377 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 09:56:21.180087    9377 out.go:169] Using Docker Desktop driver with root privileges
	I0307 09:56:21.200924    9377 cni.go:84] Creating CNI manager for ""
	I0307 09:56:21.200959    9377 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 09:56:21.200973    9377 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 09:56:21.201067    9377 start.go:340] cluster config:
	{Name:download-only-092000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:5877 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-092000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 09:56:21.222914    9377 out.go:97] Starting "download-only-092000" primary control-plane node in "download-only-092000" cluster
	I0307 09:56:21.222957    9377 cache.go:121] Beginning downloading kic base image for docker with docker
	I0307 09:56:21.245083    9377 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0307 09:56:21.245163    9377 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 09:56:21.245227    9377 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 09:56:21.294385    9377 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0307 09:56:21.294547    9377 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0307 09:56:21.294565    9377 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0307 09:56:21.294570    9377 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0307 09:56:21.294579    9377 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0307 09:56:21.515596    9377 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0307 09:56:21.515630    9377 cache.go:56] Caching tarball of preloaded images
	I0307 09:56:21.515936    9377 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 09:56:21.537749    9377 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0307 09:56:21.537789    9377 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0307 09:56:22.120169    9377 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:47acda482c3add5b56147c92b8d7f468 -> /Users/jenkins/minikube-integration/18239-8734/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-092000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-092000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.35s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.63s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.63s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.37s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-092000
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.37s)

TestDownloadOnlyKic (1.96s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-767000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-767000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-767000
--- PASS: TestDownloadOnlyKic (1.96s)

TestBinaryMirror (1.64s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-936000 --alsologtostderr --binary-mirror http://127.0.0.1:52313 --driver=docker 
aaa_download_only_test.go:314: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-936000 --alsologtostderr --binary-mirror http://127.0.0.1:52313 --driver=docker : (1.034013305s)
helpers_test.go:175: Cleaning up "binary-mirror-936000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-936000
--- PASS: TestBinaryMirror (1.64s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.2s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-556000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-556000: exit status 85 (195.196005ms)

-- stdout --
	* Profile "addons-556000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-556000"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.20s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-556000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-556000: exit status 85 (216.091065ms)
-- stdout --
	* Profile "addons-556000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-556000"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

TestAddons/Setup (223.17s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-556000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-556000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m43.170098165s)
--- PASS: TestAddons/Setup (223.17s)

TestAddons/parallel/InspektorGadget (10.88s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2z2zh" [61b3767e-312f-4ff8-8f00-6e22faa7954f] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005179959s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-556000
addons_test.go:841: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-556000: (5.874403667s)
--- PASS: TestAddons/parallel/InspektorGadget (10.88s)

TestAddons/parallel/MetricsServer (5.81s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 2.643331ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-9dclp" [d09ed04c-c3fd-4c84-bc50-93021108b2a0] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004640048s
addons_test.go:415: (dbg) Run:  kubectl --context addons-556000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-amd64 -p addons-556000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.81s)

TestAddons/parallel/HelmTiller (10.5s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.642169ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-64mxw" [f9470f90-6392-483d-987b-ba7e9536809e] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005861397s
addons_test.go:473: (dbg) Run:  kubectl --context addons-556000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-556000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.766345395s)
addons_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 -p addons-556000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.50s)

TestAddons/parallel/CSI (60.95s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 15.334393ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-556000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-556000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7737428c-fda2-476b-a024-bd971888a7fb] Pending
helpers_test.go:344: "task-pv-pod" [7737428c-fda2-476b-a024-bd971888a7fb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7737428c-fda2-476b-a024-bd971888a7fb] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.005133766s
addons_test.go:584: (dbg) Run:  kubectl --context addons-556000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-556000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-556000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-556000 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-556000 delete pod task-pv-pod: (1.256459573s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-556000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-556000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-556000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c9a12bc0-4d34-49da-9ba5-d390dc337c14] Pending
helpers_test.go:344: "task-pv-pod-restore" [c9a12bc0-4d34-49da-9ba5-d390dc337c14] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c9a12bc0-4d34-49da-9ba5-d390dc337c14] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00447796s
addons_test.go:626: (dbg) Run:  kubectl --context addons-556000 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-556000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-556000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-amd64 -p addons-556000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-amd64 -p addons-556000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.757659053s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-amd64 -p addons-556000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (60.95s)

TestAddons/parallel/Headlamp (13.5s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-556000 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-556000 --alsologtostderr -v=1: (1.491199253s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-rwkfd" [704a4d04-b41a-486b-9487-32e5900fe50f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-rwkfd" [704a4d04-b41a-486b-9487-32e5900fe50f] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.005600332s
--- PASS: TestAddons/parallel/Headlamp (13.50s)

TestAddons/parallel/CloudSpanner (5.66s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-tkjhh" [3f46f22f-a18a-4887-b54f-fc32324dcaac] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00454494s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-556000
--- PASS: TestAddons/parallel/CloudSpanner (5.66s)

TestAddons/parallel/LocalPath (53.89s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-556000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-556000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-556000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [650d537f-15ff-4029-b049-13d83ad84487] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [650d537f-15ff-4029-b049-13d83ad84487] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [650d537f-15ff-4029-b049-13d83ad84487] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003813178s
addons_test.go:891: (dbg) Run:  kubectl --context addons-556000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-amd64 -p addons-556000 ssh "cat /opt/local-path-provisioner/pvc-92b93c05-1c50-4f56-a13c-7ad66a34e4a8_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-556000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-556000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-amd64 -p addons-556000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-amd64 -p addons-556000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.021406411s)
--- PASS: TestAddons/parallel/LocalPath (53.89s)

TestAddons/parallel/NvidiaDevicePlugin (5.63s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fkrn7" [b655c021-7549-49bd-92c9-416eb776a852] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004284905s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-556000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.63s)

TestAddons/parallel/Yakd (5s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-mkq4w" [a31918f0-83c7-4e58-b596-60104b9118c3] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003175616s
--- PASS: TestAddons/parallel/Yakd (5.00s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-556000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-556000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/StoppedEnableDisable (11.65s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-556000
addons_test.go:172: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-556000: (10.923456282s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-556000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-556000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-556000
--- PASS: TestAddons/StoppedEnableDisable (11.65s)

TestErrorSpam/setup (21.7s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-466000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-466000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-466000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-466000 --driver=docker : (21.702537905s)
--- PASS: TestErrorSpam/setup (21.70s)

TestErrorSpam/start (2.02s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-466000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-466000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-466000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-466000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-466000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-466000 start --dry-run
--- PASS: TestErrorSpam/start (2.02s)

TestErrorSpam/status (1.23s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-466000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-466000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-466000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-466000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-466000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-466000 status
--- PASS: TestErrorSpam/status (1.23s)

TestErrorSpam/pause (1.73s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-466000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-466000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-466000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-466000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-466000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-466000 pause
--- PASS: TestErrorSpam/pause (1.73s)

TestErrorSpam/unpause (1.87s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-466000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-466000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-466000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-466000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-466000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-466000 unpause
--- PASS: TestErrorSpam/unpause (1.87s)

TestErrorSpam/stop (2.82s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-466000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-466000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-466000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-466000 stop: (2.14652634s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-466000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-466000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-466000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-466000 stop
--- PASS: TestErrorSpam/stop (2.82s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18239-8734/.minikube/files/etc/test/nested/copy/9209/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (74.26s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-308000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-308000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (1m14.257507284s)
--- PASS: TestFunctional/serial/StartWithProxy (74.26s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.06s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-308000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-308000 --alsologtostderr -v=8: (40.063334996s)
functional_test.go:659: soft start took 40.063784946s for "functional-308000" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.06s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-308000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (10.32s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-308000 cache add registry.k8s.io/pause:3.1: (3.924490815s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-308000 cache add registry.k8s.io/pause:3.3: (3.73456791s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-308000 cache add registry.k8s.io/pause:latest: (2.65877884s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (10.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.57s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-308000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local1505278986/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 cache add minikube-local-cache-test:functional-308000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-308000 cache add minikube-local-cache-test:functional-308000: (1.072251987s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 cache delete minikube-local-cache-test:functional-308000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-308000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.57s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
E0307 10:05:32.901355    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 10:05:32.907288    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 10:05:32.917429    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 10:05:32.937900    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.41s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh sudo crictl images
E0307 10:05:32.978164    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 10:05:33.058498    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 10:05:33.219705    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.41s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (3.47s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh sudo docker rmi registry.k8s.io/pause:latest
E0307 10:05:33.540514    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-308000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (391.49662ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 cache reload
E0307 10:05:34.180768    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 10:05:35.462101    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
functional_test.go:1154: (dbg) Done: out/minikube-darwin-amd64 -p functional-308000 cache reload: (2.264395464s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (3.47s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.17s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.52s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 kubectl -- --context functional-308000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.52s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.68s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-308000 get pods
E0307 10:05:38.022702    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.68s)

                                                
                                    
TestFunctional/serial/ExtraConfig (39.81s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-308000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0307 10:05:43.143698    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 10:05:53.384429    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 10:06:13.865071    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-308000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.81222904s)
functional_test.go:757: restart took 39.812393747s for "functional-308000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.81s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-308000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (3.07s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-308000 logs: (3.067821844s)
--- PASS: TestFunctional/serial/LogsCmd (3.07s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (3.26s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd1069252186/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-308000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd1069252186/001/logs.txt: (3.255160233s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.26s)

                                                
                                    
TestFunctional/serial/InvalidService (3.93s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-308000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-308000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-308000: exit status 115 (558.833309ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32338 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-308000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.93s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.57s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-308000 config get cpus: exit status 14 (64.822867ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-308000 config get cpus: exit status 14 (64.171442ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.57s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (15.44s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-308000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-308000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 12304: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.44s)

                                                
                                    
TestFunctional/parallel/DryRun (1.56s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-308000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-308000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (780.689608ms)

                                                
                                                
-- stdout --
	* [functional-308000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18239
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 10:09:10.799831   12175 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:09:10.800098   12175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:09:10.800103   12175 out.go:304] Setting ErrFile to fd 2...
	I0307 10:09:10.800106   12175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:09:10.800299   12175 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 10:09:10.801771   12175 out.go:298] Setting JSON to false
	I0307 10:09:10.826727   12175 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4121,"bootTime":1709830829,"procs":438,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0307 10:09:10.826853   12175 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:09:10.847985   12175 out.go:177] * [functional-308000] minikube v1.32.0 on Darwin 14.3.1
	I0307 10:09:10.910960   12175 out.go:177]   - MINIKUBE_LOCATION=18239
	I0307 10:09:10.889825   12175 notify.go:220] Checking for updates...
	I0307 10:09:10.952670   12175 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig
	I0307 10:09:10.994834   12175 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0307 10:09:11.037043   12175 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:09:11.099833   12175 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube
	I0307 10:09:11.157938   12175 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:09:11.180662   12175 config.go:182] Loaded profile config "functional-308000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:09:11.181051   12175 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:09:11.237603   12175 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0307 10:09:11.237773   12175 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 10:09:11.346575   12175 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:77 OomKillDisable:false NGoroutines:115 SystemTime:2024-03-07 18:09:11.336129593 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213279744 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0307 10:09:11.388950   12175 out.go:177] * Using the docker driver based on existing profile
	I0307 10:09:11.410041   12175 start.go:297] selected driver: docker
	I0307 10:09:11.410057   12175 start.go:901] validating driver "docker" against &{Name:functional-308000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-308000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:09:11.410158   12175 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:09:11.435142   12175 out.go:177] 
	W0307 10:09:11.456163   12175 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0307 10:09:11.477134   12175 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-308000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.56s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.76s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-308000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-308000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (758.500591ms)

                                                
                                                
-- stdout --
	* [functional-308000] minikube v1.32.0 sur Darwin 14.3.1
	  - MINIKUBE_LOCATION=18239
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 10:09:12.300066   12248 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:09:12.300344   12248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:09:12.300349   12248 out.go:304] Setting ErrFile to fd 2...
	I0307 10:09:12.300353   12248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:09:12.300574   12248 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 10:09:12.302132   12248 out.go:298] Setting JSON to false
	I0307 10:09:12.325472   12248 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4123,"bootTime":1709830829,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0307 10:09:12.325569   12248 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:09:12.346990   12248 out.go:177] * [functional-308000] minikube v1.32.0 sur Darwin 14.3.1
	I0307 10:09:12.410004   12248 out.go:177]   - MINIKUBE_LOCATION=18239
	I0307 10:09:12.388996   12248 notify.go:220] Checking for updates...
	I0307 10:09:12.430869   12248 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig
	I0307 10:09:12.473010   12248 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0307 10:09:12.514985   12248 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:09:12.556868   12248 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube
	I0307 10:09:12.614967   12248 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:09:12.652373   12248 config.go:182] Loaded profile config "functional-308000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:09:12.652816   12248 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:09:12.709004   12248 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0307 10:09:12.709169   12248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 10:09:12.810257   12248 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:77 OomKillDisable:false NGoroutines:115 SystemTime:2024-03-07 18:09:12.800011163 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213279744 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0307 10:09:12.832291   12248 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0307 10:09:12.873940   12248 start.go:297] selected driver: docker
	I0307 10:09:12.873957   12248 start.go:901] validating driver "docker" against &{Name:functional-308000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-308000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:09:12.874030   12248 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:09:12.899175   12248 out.go:177] 
	W0307 10:09:12.922142   12248 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0307 10:09:12.944911   12248 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.76s)

TestFunctional/parallel/StatusCmd (1.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.24s)

TestFunctional/parallel/AddonsCmd (0.27s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

TestFunctional/parallel/PersistentVolumeClaim (31.51s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [63ad7859-7e7c-4e72-bf38-8c2dc059f192] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004574866s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-308000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-308000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-308000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-308000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [15cd70ea-4b04-4a48-a8c3-93d20cf61d6b] Pending
helpers_test.go:344: "sp-pod" [15cd70ea-4b04-4a48-a8c3-93d20cf61d6b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [15cd70ea-4b04-4a48-a8c3-93d20cf61d6b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003793297s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-308000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-308000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-308000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fe047f7a-b66e-4dc1-9341-7161a28f7190] Pending
helpers_test.go:344: "sp-pod" [fe047f7a-b66e-4dc1-9341-7161a28f7190] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [fe047f7a-b66e-4dc1-9341-7161a28f7190] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004697915s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-308000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (31.51s)

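The PersistentVolumeClaim test above verifies durability by writing a file from one pod, deleting that pod, recreating it from the same manifest, and listing the mount again. A minimal local sketch of that write/delete/recreate/read cycle, using a plain directory as a stand-in for the provisioned volume and a subshell as a stand-in for each pod (`pv`, `run_in_pod`, and `foo` are illustrative names, not part of the test):

```shell
#!/bin/sh
# Stand-in "persistent volume": a directory that outlives each "pod" below.
pv=$(mktemp -d)

# Stand-in "pod": a subshell whose working directory is the mounted volume.
run_in_pod() ( cd "$pv" && "$@" )

run_in_pod touch foo       # first pod writes a marker file (cf. touch /tmp/mount/foo)
# ...pod deleted and recreated from the same manifest; the volume is untouched...
listing=$(run_in_pod ls)   # second pod still sees the file (cf. ls /tmp/mount)
echo "$listing"            # → foo
rm -rf "$pv"
```

The real test performs the same three steps with `kubectl exec`, only against a PVC bound by the default storage class.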
TestFunctional/parallel/SSHCmd (0.74s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.74s)

TestFunctional/parallel/CpCmd (2.43s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh -n functional-308000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 cp functional-308000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd2176416658/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh -n functional-308000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh -n functional-308000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.43s)

TestFunctional/parallel/MySQL (111.09s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-308000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-gkwk2" [f8a481e1-9a4d-417f-8735-d970d9c9f0be] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-gkwk2" [f8a481e1-9a4d-417f-8735-d970d9c9f0be] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m48.003549246s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-308000 exec mysql-859648c796-gkwk2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-308000 exec mysql-859648c796-gkwk2 -- mysql -ppassword -e "show databases;": exit status 1 (115.229362ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-308000 exec mysql-859648c796-gkwk2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-308000 exec mysql-859648c796-gkwk2 -- mysql -ppassword -e "show databases;": exit status 1 (118.01963ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-308000 exec mysql-859648c796-gkwk2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (111.09s)

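The MySQL check above only passes on the third attempt: the pod reports Running before mysqld finishes initializing, so the first query is rejected (ERROR 1045 during bootstrap) and the second cannot reach the socket (ERROR 2002). The test simply re-runs the query until it succeeds. A generic retry helper in that spirit (the `retry` and `flaky` names are illustrative; `flaky` merely simulates a server that fails once before becoming ready):

```shell
#!/bin/sh
# retry N CMD...: run CMD up to N times, pausing between attempts.
retry() {
  n=$1; shift
  i=1
  while ! "$@"; do
    [ "$i" -ge "$n" ] && return 1
    i=$((i + 1))
    sleep 1
  done
}

# Simulated not-yet-ready server: fails until its marker file exists.
marker=$(mktemp -u)
flaky() { [ -e "$marker" ] || { touch "$marker"; return 1; }; }

retry 5 flaky && echo "query succeeded"   # succeeds on the second attempt
rm -f "$marker"
```

In the log, the same pattern appears as repeated `kubectl exec ... mysql -ppassword -e "show databases;"` invocations.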
TestFunctional/parallel/FileSync (0.39s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/9209/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh "sudo cat /etc/test/nested/copy/9209/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)

TestFunctional/parallel/CertSync (2.39s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/9209.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh "sudo cat /etc/ssl/certs/9209.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/9209.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh "sudo cat /usr/share/ca-certificates/9209.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/92092.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh "sudo cat /etc/ssl/certs/92092.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/92092.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh "sudo cat /usr/share/ca-certificates/92092.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.39s)

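CertSync checks both the copied PEM files and hash-named entries such as /etc/ssl/certs/51391683.0. That filename style follows OpenSSL's subject-hash convention (as produced by `c_rehash`), where a certificate is linked under `<subject_hash>.0`. A sketch of deriving such a name for a throwaway self-signed certificate (the paths and the `/CN=example` subject are illustrative, not the certs the test syncs):

```shell
#!/bin/sh
# Create a throwaway self-signed certificate, then compute the
# /etc/ssl/certs/<subject_hash>.0 style name OpenSSL would link it under.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example" \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null
hash=$(openssl x509 -in "$tmp/cert.pem" -noout -hash)
echo "/etc/ssl/certs/$hash.0"
rm -rf "$tmp"
```

The hash is eight hex digits of the certificate's subject name, which is why the synced 9209.pem and 92092.pem files each have a hash-named counterpart in the log.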
TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-308000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-308000 ssh "sudo systemctl is-active crio": exit status 1 (386.865036ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

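NonActiveRuntimeDisabled passes despite the non-zero exit because `systemctl is-active` reports state through both stdout and its exit code (0 for active, 3 for an inactive unit by systemd convention), and the ssh session surfaces that status ("Process exited with status 3" in the log). A sketch of checking a unit this way, with a stub standing in for `systemctl is-active crio` on a node that runs dockerd rather than cri-o (the stub function is illustrative):

```shell
#!/bin/sh
# Stub for `systemctl is-active crio` on a dockerd node: prints the state
# and exits 3, systemd's code for an inactive unit.
is_active_crio() { echo inactive; return 3; }

# Capture stdout and exit status separately, as the test does.
state=$(is_active_crio)
rc=$?
if [ "$rc" -ne 0 ] && [ "$state" = "inactive" ]; then
  echo "crio disabled, as expected"
fi
```

The test asserts on the "inactive" text rather than the exit code, so a disabled runtime counts as a pass even though the command "failed".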
TestFunctional/parallel/License (1.55s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-amd64 license: (1.547545762s)
--- PASS: TestFunctional/parallel/License (1.55s)

TestFunctional/parallel/Version/short (0.18s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.18s)

TestFunctional/parallel/Version/components (0.81s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.81s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-308000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-308000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-308000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-308000 image ls --format short --alsologtostderr:
I0307 10:09:14.943588   12311 out.go:291] Setting OutFile to fd 1 ...
I0307 10:09:14.943865   12311 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 10:09:14.943870   12311 out.go:304] Setting ErrFile to fd 2...
I0307 10:09:14.943874   12311 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 10:09:14.944077   12311 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
I0307 10:09:14.944662   12311 config.go:182] Loaded profile config "functional-308000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 10:09:14.944753   12311 config.go:182] Loaded profile config "functional-308000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 10:09:14.945237   12311 cli_runner.go:164] Run: docker container inspect functional-308000 --format={{.State.Status}}
I0307 10:09:14.996789   12311 ssh_runner.go:195] Run: systemctl --version
I0307 10:09:14.996865   12311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-308000
I0307 10:09:15.049760   12311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53033 SSHKeyPath:/Users/jenkins/minikube-integration/18239-8734/.minikube/machines/functional-308000/id_rsa Username:docker}
I0307 10:09:15.134493   12311 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-308000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | latest            | e4720093a3c13 | 187MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| docker.io/localhost/my-image                | functional-308000 | becdab5b7492e | 1.24MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/google-containers/addon-resizer      | functional-308000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/minikube-local-cache-test | functional-308000 | d2857b8f1d37a | 30B    |
| docker.io/library/nginx                     | alpine            | 6913ed9ec8d00 | 42.6MB |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-308000 image ls --format table --alsologtostderr:
I0307 10:09:21.498219   12353 out.go:291] Setting OutFile to fd 1 ...
I0307 10:09:21.498396   12353 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 10:09:21.498403   12353 out.go:304] Setting ErrFile to fd 2...
I0307 10:09:21.498407   12353 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 10:09:21.499505   12353 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
I0307 10:09:21.500118   12353 config.go:182] Loaded profile config "functional-308000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 10:09:21.500207   12353 config.go:182] Loaded profile config "functional-308000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 10:09:21.500570   12353 cli_runner.go:164] Run: docker container inspect functional-308000 --format={{.State.Status}}
I0307 10:09:21.551776   12353 ssh_runner.go:195] Run: systemctl --version
I0307 10:09:21.551849   12353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-308000
I0307 10:09:21.605808   12353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53033 SSHKeyPath:/Users/jenkins/minikube-integration/18239-8734/.minikube/machines/functional-308000/id_rsa Username:docker}
I0307 10:09:21.690441   12353 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/03/07 10:09:27 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-308000 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"d2857b8f1d37ad6bd47d94dcedc19614e348381e341928b7ca8eb6b169c8cccb","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-308000"],"size":"30"},{"id":"6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"d058aa5ab969ce7b84d25e
7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"becdab5b7492e98673b63a3ccf28ffdfbc7553087d25388960ee1147fd42a295","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-308000"],"size":"1240000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-308000"],"size":"32900000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5
553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1
"],"size":"53600000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-308000 image ls --format json --alsologtostderr:
I0307 10:09:21.187233   12347 out.go:291] Setting OutFile to fd 1 ...
I0307 10:09:21.187924   12347 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 10:09:21.187933   12347 out.go:304] Setting ErrFile to fd 2...
I0307 10:09:21.187939   12347 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 10:09:21.188830   12347 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
I0307 10:09:21.189499   12347 config.go:182] Loaded profile config "functional-308000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 10:09:21.189619   12347 config.go:182] Loaded profile config "functional-308000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 10:09:21.189998   12347 cli_runner.go:164] Run: docker container inspect functional-308000 --format={{.State.Status}}
I0307 10:09:21.244479   12347 ssh_runner.go:195] Run: systemctl --version
I0307 10:09:21.244554   12347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-308000
I0307 10:09:21.300758   12347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53033 SSHKeyPath:/Users/jenkins/minikube-integration/18239-8734/.minikube/machines/functional-308000/id_rsa Username:docker}
I0307 10:09:21.384409   12347 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
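The JSON listing above is a plain array of image records, each with `id`, `repoDigests`, `repoTags`, and `size` fields, so it is straightforward to post-process. A minimal sketch (field names taken from the output above; the 13-character ID truncation mirrors the short IDs in the table listing, and the sample record is one entry copied from the JSON output):

```python
import json

# One record in the shape emitted by `minikube image ls --format json`
# (copied from the test output above; `size` is a decimal string of bytes).
sample = ('[{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",'
          '"repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"}]')

images = json.loads(sample)
for img in images:
    # repoTags may list several tags; fall back to <none> for untagged images
    tags = ", ".join(img["repoTags"]) or "<none>"
    print(f'{tags}  {img["id"][:13]}  {int(img["size"]) / 1e6:.1f}MB')
    # → registry.k8s.io/pause:3.9  e6f1816883972  0.7MB
```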

TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-308000 image ls --format yaml --alsologtostderr:
- id: e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-308000
size: "32900000"
- id: d2857b8f1d37ad6bd47d94dcedc19614e348381e341928b7ca8eb6b169c8cccb
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-308000
size: "30"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-308000 image ls --format yaml --alsologtostderr:
I0307 10:09:15.261564   12317 out.go:291] Setting OutFile to fd 1 ...
I0307 10:09:15.262298   12317 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 10:09:15.262307   12317 out.go:304] Setting ErrFile to fd 2...
I0307 10:09:15.262313   12317 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 10:09:15.263008   12317 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
I0307 10:09:15.263644   12317 config.go:182] Loaded profile config "functional-308000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 10:09:15.263731   12317 config.go:182] Loaded profile config "functional-308000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 10:09:15.264098   12317 cli_runner.go:164] Run: docker container inspect functional-308000 --format={{.State.Status}}
I0307 10:09:15.315654   12317 ssh_runner.go:195] Run: systemctl --version
I0307 10:09:15.315728   12317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-308000
I0307 10:09:15.406795   12317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53033 SSHKeyPath:/Users/jenkins/minikube-integration/18239-8734/.minikube/machines/functional-308000/id_rsa Username:docker}
I0307 10:09:15.490120   12317 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-308000 ssh pgrep buildkitd: exit status 1 (349.634585ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 image build -t localhost/my-image:functional-308000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-308000 image build -t localhost/my-image:functional-308000 testdata/build --alsologtostderr: (4.928770319s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-308000 image build -t localhost/my-image:functional-308000 testdata/build --alsologtostderr:
I0307 10:09:15.946375   12333 out.go:291] Setting OutFile to fd 1 ...
I0307 10:09:15.946655   12333 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 10:09:15.946660   12333 out.go:304] Setting ErrFile to fd 2...
I0307 10:09:15.946664   12333 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 10:09:15.946860   12333 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
I0307 10:09:15.947444   12333 config.go:182] Loaded profile config "functional-308000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 10:09:15.948093   12333 config.go:182] Loaded profile config "functional-308000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 10:09:15.948506   12333 cli_runner.go:164] Run: docker container inspect functional-308000 --format={{.State.Status}}
I0307 10:09:15.997908   12333 ssh_runner.go:195] Run: systemctl --version
I0307 10:09:15.997984   12333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-308000
I0307 10:09:16.048055   12333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53033 SSHKeyPath:/Users/jenkins/minikube-integration/18239-8734/.minikube/machines/functional-308000/id_rsa Username:docker}
I0307 10:09:16.129628   12333 build_images.go:151] Building image from path: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.139950573.tar
I0307 10:09:16.129711   12333 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0307 10:09:16.144578   12333 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.139950573.tar
I0307 10:09:16.148768   12333 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.139950573.tar: stat -c "%s %y" /var/lib/minikube/build/build.139950573.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.139950573.tar': No such file or directory
I0307 10:09:16.148795   12333 ssh_runner.go:362] scp /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.139950573.tar --> /var/lib/minikube/build/build.139950573.tar (3072 bytes)
I0307 10:09:16.188760   12333 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.139950573
I0307 10:09:16.205251   12333 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.139950573 -xf /var/lib/minikube/build/build.139950573.tar
I0307 10:09:16.220791   12333 docker.go:360] Building image: /var/lib/minikube/build/build.139950573
I0307 10:09:16.220901   12333 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-308000 /var/lib/minikube/build/build.139950573
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 1.4s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:becdab5b7492e98673b63a3ccf28ffdfbc7553087d25388960ee1147fd42a295 done
#8 naming to localhost/my-image:functional-308000 done
#8 DONE 0.0s
I0307 10:09:20.742589   12333 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-308000 /var/lib/minikube/build/build.139950573: (4.521701733s)
I0307 10:09:20.742681   12333 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.139950573
I0307 10:09:20.759990   12333 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.139950573.tar
I0307 10:09:20.783946   12333 build_images.go:207] Built localhost/my-image:functional-308000 from /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.139950573.tar
I0307 10:09:20.783974   12333 build_images.go:123] succeeded building to: functional-308000
I0307 10:09:20.783978   12333 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.59s)
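The build log above fully determines the shape of the Dockerfile under testdata/build: step #5 is a FROM on gcr.io/k8s-minikube/busybox, step #6 runs `RUN true`, and step #7 adds content.txt to the image root. A hedged reconstruction (the actual file in the minikube repo may differ in comments or whitespace; the tag defaults to latest, matching the metadata load in step #2):

```dockerfile
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```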

TestFunctional/parallel/ImageCommands/Setup (5.61s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.553418735s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-308000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.61s)

TestFunctional/parallel/DockerEnv/bash (1.73s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-308000 docker-env) && out/minikube-darwin-amd64 status -p functional-308000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-308000 docker-env) && out/minikube-darwin-amd64 status -p functional-308000": (1.06286713s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-308000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.73s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 image load --daemon gcr.io/google-containers/addon-resizer:functional-308000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-308000 image load --daemon gcr.io/google-containers/addon-resizer:functional-308000 --alsologtostderr: (4.051153584s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.35s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 image load --daemon gcr.io/google-containers/addon-resizer:functional-308000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-308000 image load --daemon gcr.io/google-containers/addon-resizer:functional-308000 --alsologtostderr: (2.309438378s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.60s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.350435955s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-308000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 image load --daemon gcr.io/google-containers/addon-resizer:functional-308000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-308000 image load --daemon gcr.io/google-containers/addon-resizer:functional-308000 --alsologtostderr: (3.103474411s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.80s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 image save gcr.io/google-containers/addon-resizer:functional-308000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-308000 image save gcr.io/google-containers/addon-resizer:functional-308000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.094829379s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.09s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 image rm gcr.io/google-containers/addon-resizer:functional-308000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
E0307 10:06:54.824855    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-308000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.668080373s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.97s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-308000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 image save --daemon gcr.io/google-containers/addon-resizer:functional-308000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-308000 image save --daemon gcr.io/google-containers/addon-resizer:functional-308000 --alsologtostderr: (1.101785204s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-308000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.21s)

TestFunctional/parallel/ServiceCmd/DeployApp (62.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-308000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-308000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-zq8zh" [002adc52-8e4c-4975-9dc5-a94370308e9a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-zq8zh" [002adc52-8e4c-4975-9dc5-a94370308e9a] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 1m2.004427213s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (62.12s)

TestFunctional/parallel/ServiceCmd/List (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.44s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 service list -o json
functional_test.go:1490: Took "430.969021ms" to run "out/minikube-darwin-amd64 -p functional-308000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-308000 service --namespace=default --https --url hello-node: signal: killed (15.002727057s)

-- stdout --
	https://127.0.0.1:53278

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:53278
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)

TestFunctional/parallel/ServiceCmd/Format (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 service hello-node --url --format={{.IP}}
E0307 10:08:16.765683    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-308000 service hello-node --url --format={{.IP}}: signal: killed (15.004255371s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-308000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-308000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-308000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 11754: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-308000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-308000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-308000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [66c6e1db-0809-40f3-b195-0d623cc62a21] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [66c6e1db-0809-40f3-b195-0d623cc62a21] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.003121243s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.14s)

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-308000 service hello-node --url: signal: killed (15.002439672s)

-- stdout --
	http://127.0.0.1:53348

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:53348
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-308000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-308000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 11785: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "455.885054ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "87.255416ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.6s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "459.103483ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "137.232669ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.60s)

TestFunctional/parallel/MountCmd/any-port (11.67s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-308000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3661964262/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1709834935430996000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3661964262/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1709834935430996000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3661964262/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1709834935430996000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3661964262/001/test-1709834935430996000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-308000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (367.394436ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar  7 18:08 created-by-test
-rw-r--r-- 1 docker docker 24 Mar  7 18:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar  7 18:08 test-1709834935430996000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh cat /mount-9p/test-1709834935430996000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-308000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e0b78bee-67f6-4907-944b-2745e4b6a52e] Pending
helpers_test.go:344: "busybox-mount" [e0b78bee-67f6-4907-944b-2745e4b6a52e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e0b78bee-67f6-4907-944b-2745e4b6a52e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e0b78bee-67f6-4907-944b-2745e4b6a52e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.004049385s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-308000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-308000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3661964262/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.67s)

TestFunctional/parallel/MountCmd/specific-port (2.13s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-308000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port477208637/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-308000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (371.27903ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-308000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port477208637/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-308000 ssh "sudo umount -f /mount-9p": exit status 1 (378.482626ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-308000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-308000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port477208637/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.13s)

TestFunctional/parallel/MountCmd/VerifyCleanup (3s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-308000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1193664591/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-308000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1193664591/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-308000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1193664591/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-308000 ssh "findmnt -T" /mount1: exit status 1 (456.847324ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-308000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-308000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-308000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1193664591/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-308000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1193664591/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-308000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1193664591/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (3.00s)

TestFunctional/delete_addon-resizer_images (0.13s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-308000
--- PASS: TestFunctional/delete_addon-resizer_images (0.13s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-308000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-308000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestMutliControlPlane/serial/StartCluster (110.6s)

=== RUN   TestMutliControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-470000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker 
E0307 10:10:32.920379    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
E0307 10:11:00.604655    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-470000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker : (1m49.456097494s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-darwin-amd64 -p ha-470000 status -v=7 --alsologtostderr: (1.145581488s)
--- PASS: TestMutliControlPlane/serial/StartCluster (110.60s)

TestMutliControlPlane/serial/DeployApp (9.61s)

=== RUN   TestMutliControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-470000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-470000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-470000 -- rollout status deployment/busybox: (7.004931347s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-470000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-470000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-470000 -- exec busybox-5b5d89c9d6-9gpnj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-470000 -- exec busybox-5b5d89c9d6-jjmbh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-470000 -- exec busybox-5b5d89c9d6-mxxf5 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-470000 -- exec busybox-5b5d89c9d6-9gpnj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-470000 -- exec busybox-5b5d89c9d6-jjmbh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-470000 -- exec busybox-5b5d89c9d6-mxxf5 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-470000 -- exec busybox-5b5d89c9d6-9gpnj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-470000 -- exec busybox-5b5d89c9d6-jjmbh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-470000 -- exec busybox-5b5d89c9d6-mxxf5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMutliControlPlane/serial/DeployApp (9.61s)

TestMutliControlPlane/serial/PingHostFromPods (1.39s)

=== RUN   TestMutliControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-470000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-470000 -- exec busybox-5b5d89c9d6-9gpnj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-470000 -- exec busybox-5b5d89c9d6-9gpnj -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-470000 -- exec busybox-5b5d89c9d6-jjmbh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-470000 -- exec busybox-5b5d89c9d6-jjmbh -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-470000 -- exec busybox-5b5d89c9d6-mxxf5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-470000 -- exec busybox-5b5d89c9d6-mxxf5 -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMutliControlPlane/serial/PingHostFromPods (1.39s)

TestMutliControlPlane/serial/AddWorkerNode (20.22s)

=== RUN   TestMutliControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-470000 -v=7 --alsologtostderr
E0307 10:11:35.579176    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 10:11:35.585243    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 10:11:35.595373    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 10:11:35.615459    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 10:11:35.655815    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 10:11:35.735928    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 10:11:35.896090    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 10:11:36.216233    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 10:11:36.856542    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 10:11:38.136958    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 10:11:40.697165    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 10:11:45.817726    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-470000 -v=7 --alsologtostderr: (18.819951508s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-darwin-amd64 -p ha-470000 status -v=7 --alsologtostderr: (1.402993453s)
--- PASS: TestMutliControlPlane/serial/AddWorkerNode (20.22s)

TestMutliControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMutliControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-470000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMutliControlPlane/serial/NodeLabels (0.06s)

TestMutliControlPlane/serial/HAppyAfterClusterStart (1.11s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.114219057s)
--- PASS: TestMutliControlPlane/serial/HAppyAfterClusterStart (1.11s)

TestMutliControlPlane/serial/CopyFile (24.52s)

=== RUN   TestMutliControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-darwin-amd64 -p ha-470000 status --output json -v=7 --alsologtostderr: (1.37754884s)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 cp testdata/cp-test.txt ha-470000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000 "sudo cat /home/docker/cp-test.txt"
E0307 10:11:56.058365    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 cp ha-470000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMutliControlPlaneserialCopyFile3270807135/001/cp-test_ha-470000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 cp ha-470000:/home/docker/cp-test.txt ha-470000-m02:/home/docker/cp-test_ha-470000_ha-470000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000-m02 "sudo cat /home/docker/cp-test_ha-470000_ha-470000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 cp ha-470000:/home/docker/cp-test.txt ha-470000-m03:/home/docker/cp-test_ha-470000_ha-470000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000-m03 "sudo cat /home/docker/cp-test_ha-470000_ha-470000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 cp ha-470000:/home/docker/cp-test.txt ha-470000-m04:/home/docker/cp-test_ha-470000_ha-470000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000-m04 "sudo cat /home/docker/cp-test_ha-470000_ha-470000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 cp testdata/cp-test.txt ha-470000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 cp ha-470000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMutliControlPlaneserialCopyFile3270807135/001/cp-test_ha-470000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 cp ha-470000-m02:/home/docker/cp-test.txt ha-470000:/home/docker/cp-test_ha-470000-m02_ha-470000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000 "sudo cat /home/docker/cp-test_ha-470000-m02_ha-470000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 cp ha-470000-m02:/home/docker/cp-test.txt ha-470000-m03:/home/docker/cp-test_ha-470000-m02_ha-470000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000-m03 "sudo cat /home/docker/cp-test_ha-470000-m02_ha-470000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 cp ha-470000-m02:/home/docker/cp-test.txt ha-470000-m04:/home/docker/cp-test_ha-470000-m02_ha-470000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000-m04 "sudo cat /home/docker/cp-test_ha-470000-m02_ha-470000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 cp testdata/cp-test.txt ha-470000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 cp ha-470000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMutliControlPlaneserialCopyFile3270807135/001/cp-test_ha-470000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 cp ha-470000-m03:/home/docker/cp-test.txt ha-470000:/home/docker/cp-test_ha-470000-m03_ha-470000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000 "sudo cat /home/docker/cp-test_ha-470000-m03_ha-470000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 cp ha-470000-m03:/home/docker/cp-test.txt ha-470000-m02:/home/docker/cp-test_ha-470000-m03_ha-470000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000-m02 "sudo cat /home/docker/cp-test_ha-470000-m03_ha-470000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 cp ha-470000-m03:/home/docker/cp-test.txt ha-470000-m04:/home/docker/cp-test_ha-470000-m03_ha-470000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000-m04 "sudo cat /home/docker/cp-test_ha-470000-m03_ha-470000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 cp testdata/cp-test.txt ha-470000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 cp ha-470000-m04:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMutliControlPlaneserialCopyFile3270807135/001/cp-test_ha-470000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 cp ha-470000-m04:/home/docker/cp-test.txt ha-470000:/home/docker/cp-test_ha-470000-m04_ha-470000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000 "sudo cat /home/docker/cp-test_ha-470000-m04_ha-470000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 cp ha-470000-m04:/home/docker/cp-test.txt ha-470000-m02:/home/docker/cp-test_ha-470000-m04_ha-470000-m02.txt
E0307 10:12:16.538831    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000-m02 "sudo cat /home/docker/cp-test_ha-470000-m04_ha-470000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 cp ha-470000-m04:/home/docker/cp-test.txt ha-470000-m03:/home/docker/cp-test_ha-470000-m04_ha-470000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 ssh -n ha-470000-m03 "sudo cat /home/docker/cp-test_ha-470000-m04_ha-470000-m03.txt"
--- PASS: TestMutliControlPlane/serial/CopyFile (24.52s)

TestMutliControlPlane/serial/StopSecondaryNode (11.85s)

=== RUN   TestMutliControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-470000 node stop m02 -v=7 --alsologtostderr: (10.781601629s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-470000 status -v=7 --alsologtostderr: exit status 7 (1.072241008s)

-- stdout --
	ha-470000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-470000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-470000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-470000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0307 10:12:29.609704   13674 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:12:29.609999   13674 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:12:29.610005   13674 out.go:304] Setting ErrFile to fd 2...
	I0307 10:12:29.610009   13674 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:12:29.610186   13674 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 10:12:29.610370   13674 out.go:298] Setting JSON to false
	I0307 10:12:29.610392   13674 mustload.go:65] Loading cluster: ha-470000
	I0307 10:12:29.610436   13674 notify.go:220] Checking for updates...
	I0307 10:12:29.610702   13674 config.go:182] Loaded profile config "ha-470000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:12:29.610720   13674 status.go:255] checking status of ha-470000 ...
	I0307 10:12:29.611122   13674 cli_runner.go:164] Run: docker container inspect ha-470000 --format={{.State.Status}}
	I0307 10:12:29.661188   13674 status.go:330] ha-470000 host status = "Running" (err=<nil>)
	I0307 10:12:29.661225   13674 host.go:66] Checking if "ha-470000" exists ...
	I0307 10:12:29.661478   13674 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-470000
	I0307 10:12:29.711262   13674 host.go:66] Checking if "ha-470000" exists ...
	I0307 10:12:29.711557   13674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 10:12:29.711627   13674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-470000
	I0307 10:12:29.762688   13674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53523 SSHKeyPath:/Users/jenkins/minikube-integration/18239-8734/.minikube/machines/ha-470000/id_rsa Username:docker}
	I0307 10:12:29.847873   13674 ssh_runner.go:195] Run: systemctl --version
	I0307 10:12:29.852818   13674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 10:12:29.871005   13674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-470000
	I0307 10:12:29.921283   13674 kubeconfig.go:125] found "ha-470000" server: "https://127.0.0.1:53527"
	I0307 10:12:29.921313   13674 api_server.go:166] Checking apiserver status ...
	I0307 10:12:29.921356   13674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:12:29.943962   13674 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2427/cgroup
	W0307 10:12:29.959854   13674 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2427/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:12:29.959911   13674 ssh_runner.go:195] Run: ls
	I0307 10:12:29.964637   13674 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:53527/healthz ...
	I0307 10:12:29.970710   13674 api_server.go:279] https://127.0.0.1:53527/healthz returned 200:
	ok
	I0307 10:12:29.970724   13674 status.go:422] ha-470000 apiserver status = Running (err=<nil>)
	I0307 10:12:29.970738   13674 status.go:257] ha-470000 status: &{Name:ha-470000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 10:12:29.970749   13674 status.go:255] checking status of ha-470000-m02 ...
	I0307 10:12:29.970996   13674 cli_runner.go:164] Run: docker container inspect ha-470000-m02 --format={{.State.Status}}
	I0307 10:12:30.021128   13674 status.go:330] ha-470000-m02 host status = "Stopped" (err=<nil>)
	I0307 10:12:30.021152   13674 status.go:343] host is not running, skipping remaining checks
	I0307 10:12:30.021161   13674 status.go:257] ha-470000-m02 status: &{Name:ha-470000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 10:12:30.021178   13674 status.go:255] checking status of ha-470000-m03 ...
	I0307 10:12:30.021462   13674 cli_runner.go:164] Run: docker container inspect ha-470000-m03 --format={{.State.Status}}
	I0307 10:12:30.071173   13674 status.go:330] ha-470000-m03 host status = "Running" (err=<nil>)
	I0307 10:12:30.071201   13674 host.go:66] Checking if "ha-470000-m03" exists ...
	I0307 10:12:30.071503   13674 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-470000-m03
	I0307 10:12:30.121450   13674 host.go:66] Checking if "ha-470000-m03" exists ...
	I0307 10:12:30.121708   13674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 10:12:30.121756   13674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-470000-m03
	I0307 10:12:30.172293   13674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53627 SSHKeyPath:/Users/jenkins/minikube-integration/18239-8734/.minikube/machines/ha-470000-m03/id_rsa Username:docker}
	I0307 10:12:30.255147   13674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 10:12:30.272176   13674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-470000
	I0307 10:12:30.323099   13674 kubeconfig.go:125] found "ha-470000" server: "https://127.0.0.1:53527"
	I0307 10:12:30.323122   13674 api_server.go:166] Checking apiserver status ...
	I0307 10:12:30.323162   13674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:12:30.340264   13674 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2266/cgroup
	W0307 10:12:30.356014   13674 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2266/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:12:30.356074   13674 ssh_runner.go:195] Run: ls
	I0307 10:12:30.360234   13674 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:53527/healthz ...
	I0307 10:12:30.364754   13674 api_server.go:279] https://127.0.0.1:53527/healthz returned 200:
	ok
	I0307 10:12:30.364774   13674 status.go:422] ha-470000-m03 apiserver status = Running (err=<nil>)
	I0307 10:12:30.364786   13674 status.go:257] ha-470000-m03 status: &{Name:ha-470000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 10:12:30.364800   13674 status.go:255] checking status of ha-470000-m04 ...
	I0307 10:12:30.365102   13674 cli_runner.go:164] Run: docker container inspect ha-470000-m04 --format={{.State.Status}}
	I0307 10:12:30.416364   13674 status.go:330] ha-470000-m04 host status = "Running" (err=<nil>)
	I0307 10:12:30.416388   13674 host.go:66] Checking if "ha-470000-m04" exists ...
	I0307 10:12:30.416633   13674 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-470000-m04
	I0307 10:12:30.467401   13674 host.go:66] Checking if "ha-470000-m04" exists ...
	I0307 10:12:30.467641   13674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 10:12:30.467697   13674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-470000-m04
	I0307 10:12:30.517846   13674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53750 SSHKeyPath:/Users/jenkins/minikube-integration/18239-8734/.minikube/machines/ha-470000-m04/id_rsa Username:docker}
	I0307 10:12:30.600923   13674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 10:12:30.618128   13674 status.go:257] ha-470000-m04 status: &{Name:ha-470000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMutliControlPlane/serial/StopSecondaryNode (11.85s)

TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

TestMutliControlPlane/serial/RestartSecondaryNode (33.61s)

=== RUN   TestMutliControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 node start m02 -v=7 --alsologtostderr
E0307 10:12:57.498671    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-470000 node start m02 -v=7 --alsologtostderr: (32.045751901s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-darwin-amd64 -p ha-470000 status -v=7 --alsologtostderr: (1.513202594s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMutliControlPlane/serial/RestartSecondaryNode (33.61s)

TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.20s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.195344938s)
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.20s)

TestMutliControlPlane/serial/RestartClusterKeepsNodes (168.85s)

=== RUN   TestMutliControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-470000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-470000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-470000 -v=7 --alsologtostderr: (34.245344464s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-470000 --wait=true -v=7 --alsologtostderr
E0307 10:14:19.418908    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
E0307 10:15:32.917277    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-470000 --wait=true -v=7 --alsologtostderr: (2m14.467299816s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-470000
--- PASS: TestMutliControlPlane/serial/RestartClusterKeepsNodes (168.85s)

TestMutliControlPlane/serial/DeleteSecondaryNode (12.04s)

=== RUN   TestMutliControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-470000 node delete m03 -v=7 --alsologtostderr: (10.893829478s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Done: out/minikube-darwin-amd64 -p ha-470000 status -v=7 --alsologtostderr: (1.025985077s)
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/DeleteSecondaryNode (12.04s)

TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

TestMutliControlPlane/serial/StopCluster (32.89s)

=== RUN   TestMutliControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 stop -v=7 --alsologtostderr
E0307 10:16:35.576603    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-470000 stop -v=7 --alsologtostderr: (32.679508801s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-470000 status -v=7 --alsologtostderr: exit status 7 (214.342123ms)

-- stdout --
	ha-470000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-470000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-470000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0307 10:16:40.700492   14459 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:16:40.700772   14459 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:16:40.700778   14459 out.go:304] Setting ErrFile to fd 2...
	I0307 10:16:40.700782   14459 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:16:40.700967   14459 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18239-8734/.minikube/bin
	I0307 10:16:40.701680   14459 out.go:298] Setting JSON to false
	I0307 10:16:40.701715   14459 mustload.go:65] Loading cluster: ha-470000
	I0307 10:16:40.701913   14459 notify.go:220] Checking for updates...
	I0307 10:16:40.702285   14459 config.go:182] Loaded profile config "ha-470000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:16:40.702305   14459 status.go:255] checking status of ha-470000 ...
	I0307 10:16:40.702691   14459 cli_runner.go:164] Run: docker container inspect ha-470000 --format={{.State.Status}}
	I0307 10:16:40.752850   14459 status.go:330] ha-470000 host status = "Stopped" (err=<nil>)
	I0307 10:16:40.752911   14459 status.go:343] host is not running, skipping remaining checks
	I0307 10:16:40.752926   14459 status.go:257] ha-470000 status: &{Name:ha-470000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 10:16:40.752963   14459 status.go:255] checking status of ha-470000-m02 ...
	I0307 10:16:40.753266   14459 cli_runner.go:164] Run: docker container inspect ha-470000-m02 --format={{.State.Status}}
	I0307 10:16:40.802278   14459 status.go:330] ha-470000-m02 host status = "Stopped" (err=<nil>)
	I0307 10:16:40.802308   14459 status.go:343] host is not running, skipping remaining checks
	I0307 10:16:40.802317   14459 status.go:257] ha-470000-m02 status: &{Name:ha-470000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 10:16:40.802337   14459 status.go:255] checking status of ha-470000-m04 ...
	I0307 10:16:40.802596   14459 cli_runner.go:164] Run: docker container inspect ha-470000-m04 --format={{.State.Status}}
	I0307 10:16:40.852039   14459 status.go:330] ha-470000-m04 host status = "Stopped" (err=<nil>)
	I0307 10:16:40.852063   14459 status.go:343] host is not running, skipping remaining checks
	I0307 10:16:40.852073   14459 status.go:257] ha-470000-m04 status: &{Name:ha-470000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMutliControlPlane/serial/StopCluster (32.89s)

TestMutliControlPlane/serial/RestartCluster (58.94s)

=== RUN   TestMutliControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-470000 --wait=true -v=7 --alsologtostderr --driver=docker 
E0307 10:17:03.258255    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-470000 --wait=true -v=7 --alsologtostderr --driver=docker : (57.509382895s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 status -v=7 --alsologtostderr
ha_test.go:566: (dbg) Done: out/minikube-darwin-amd64 -p ha-470000 status -v=7 --alsologtostderr: (1.195511218s)
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/RestartCluster (58.94s)

TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.99s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.99s)

TestMutliControlPlane/serial/AddSecondaryNode (47.09s)

=== RUN   TestMutliControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-470000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-470000 --control-plane -v=7 --alsologtostderr: (45.672367086s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-470000 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-darwin-amd64 -p ha-470000 status -v=7 --alsologtostderr: (1.419576593s)
--- PASS: TestMutliControlPlane/serial/AddSecondaryNode (47.09s)

TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.12s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.122566253s)
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.12s)

TestImageBuild/serial/Setup (21.65s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-045000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-045000 --driver=docker : (21.653775186s)
--- PASS: TestImageBuild/serial/Setup (21.65s)

TestImageBuild/serial/NormalBuild (4.76s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-045000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-045000: (4.757796859s)
--- PASS: TestImageBuild/serial/NormalBuild (4.76s)

TestImageBuild/serial/BuildWithBuildArg (1.18s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-045000
image_test.go:99: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-045000: (1.181566912s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.18s)

TestImageBuild/serial/BuildWithDockerIgnore (1.1s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-045000
image_test.go:133: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-045000: (1.096801937s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.10s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.04s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-045000
image_test.go:88: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-045000: (1.037660427s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.04s)

TestJSONOutput/start/Command (36.87s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-111000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-111000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (36.866161905s)
--- PASS: TestJSONOutput/start/Command (36.87s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.6s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-111000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.60s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-111000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.84s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-111000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-111000 --output=json --user=testUser: (10.844527948s)
--- PASS: TestJSONOutput/stop/Command (10.84s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.84s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-011000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-011000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (460.100242ms)
-- stdout --
	{"specversion":"1.0","id":"2b9a0108-d01e-465b-af18-8e3444e30338","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-011000] minikube v1.32.0 on Darwin 14.3.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"784ca5aa-42cf-4ccc-b590-2b12a4765525","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18239"}}
	{"specversion":"1.0","id":"c2e03455-9c17-4f13-97b6-0dd07eca61f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18239-8734/kubeconfig"}}
	{"specversion":"1.0","id":"5326a855-c4cf-4f04-ae29-cc98c12d8ab8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"df4f8a73-9253-438f-bd6e-2d3c78045376","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c34cc2c5-23f8-49bc-8614-02e2b1bb49da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18239-8734/.minikube"}}
	{"specversion":"1.0","id":"94c6bd53-553c-4186-aaff-d5118a077b53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3f6befcd-50c9-422b-8515-eb0611bc3c1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-011000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-011000
--- PASS: TestErrorJSONOutput (0.84s)

TestKicCustomNetwork/create_custom_network (23.99s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-585000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-585000 --network=: (21.573963638s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-585000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-585000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-585000: (2.367727972s)
--- PASS: TestKicCustomNetwork/create_custom_network (23.99s)

TestKicCustomNetwork/use_default_bridge_network (23.44s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-546000 --network=bridge
E0307 10:20:32.914425    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-546000 --network=bridge: (21.18183058s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-546000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-546000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-546000: (2.21136966s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.44s)

TestKicExistingNetwork (23.73s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-033000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-033000 --network=existing-network: (21.113818963s)
helpers_test.go:175: Cleaning up "existing-network-033000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-033000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-033000: (2.226859674s)
--- PASS: TestKicExistingNetwork (23.73s)

TestKicCustomSubnet (24.17s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-489000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-489000 --subnet=192.168.60.0/24: (21.740133446s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-489000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-489000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-489000
E0307 10:21:35.574730    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-489000: (2.378447915s)
--- PASS: TestKicCustomSubnet (24.17s)

TestKicStaticIP (23.76s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-584000 --static-ip=192.168.200.200
E0307 10:21:55.959131    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-584000 --static-ip=192.168.200.200: (21.128872212s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-584000 ip
helpers_test.go:175: Cleaning up "static-ip-584000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-584000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-584000: (2.396308404s)
--- PASS: TestKicStaticIP (23.76s)

TestMainNoArgs (0.09s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.09s)

TestMinikubeProfile (51.09s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-839000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-839000 --driver=docker : (21.869955661s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-841000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-841000 --driver=docker : (22.523596094s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-839000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-841000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-841000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-841000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-841000: (2.423936895s)
helpers_test.go:175: Cleaning up "first-839000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-839000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-839000: (2.394775227s)
--- PASS: TestMinikubeProfile (51.09s)

TestMountStart/serial/StartWithMountFirst (7.69s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-446000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-446000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.684813762s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.69s)

TestMountStart/serial/VerifyMountFirst (0.38s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-446000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

TestMountStart/serial/StartWithMountSecond (7.69s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-461000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-461000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.692517556s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.69s)

TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-461000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (2.06s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-446000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-446000 --alsologtostderr -v=5: (2.058712796s)
--- PASS: TestMountStart/serial/DeleteFirst (2.06s)

TestMountStart/serial/VerifyMountPostDelete (0.42s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-461000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.42s)

TestMountStart/serial/Stop (1.55s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-461000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-461000: (1.547248605s)
--- PASS: TestMountStart/serial/Stop (1.55s)

TestMountStart/serial/RestartStopped (8.86s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-461000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-461000: (7.855817784s)
--- PASS: TestMountStart/serial/RestartStopped (8.86s)

TestPreload (204.67s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-075000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0307 11:10:33.025105    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-075000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (2m29.220670193s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-075000 image pull gcr.io/k8s-minikube/busybox
E0307 11:11:35.685399    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/functional-308000/client.crt: no such file or directory
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-075000 image pull gcr.io/k8s-minikube/busybox: (6.044636831s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-075000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-075000: (10.809278571s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-075000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
E0307 11:11:56.077068    9209 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18239-8734/.minikube/profiles/addons-556000/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-075000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (35.767290126s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-075000 image list
helpers_test.go:175: Cleaning up "test-preload-075000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-075000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-075000: (2.48886362s)
--- PASS: TestPreload (204.67s)

Test skip (19/211)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestAddons/parallel/Registry (17.49s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 14.684407ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-9d8ks" [ac500c7e-a798-4289-aaba-67b8b5d4c2c3] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005978971s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-x2nn8" [0fa782aa-f206-45f9-8d4f-978c47e388bd] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005242095s
addons_test.go:340: (dbg) Run:  kubectl --context addons-556000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-556000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-556000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.410902158s)
addons_test.go:355: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (17.49s)

TestAddons/parallel/Ingress (10.77s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-556000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-556000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-556000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0bf45810-fd9e-4cd6-ac98-92fa3b580e69] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0bf45810-fd9e-4cd6-ac98-92fa3b580e69] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004914057s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-556000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:282: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (10.77s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (7.19s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-308000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-308000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-zlggj" [02130dcc-652c-46a3-a26e-804122a4efb7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-zlggj" [02130dcc-652c-46a3-a26e-804122a4efb7] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003447887s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (7.19s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)